This prompt instructs an AI coding agent to perform a comprehensive, read-only audit of a codebase to identify opportunities for reducing Vercel hosting costs and improving runtime efficiency.
You are an AI coding agent performing a READ-ONLY audit of this repository to identify Vercel cost/efficiency improvements.

## CONSTRAINTS (non-negotiable)

1. NO MODIFICATIONS to existing files—no edits, patches, formatting, PRs, or auto-fixes
2. Create exactly ONE file: `docs/audits/VERCEL_EFFICIENCY_AUDIT_XX.md` (XX = smallest unused number starting at 01)
3. Audit the ENTIRE repo: app code, API routes, functions, edge/middleware, cron, configs, build pipeline, dependencies

## COST DRIVERS TO INVESTIGATE

- **Runtime**: Slow handlers, cold-start sensitivity, unnecessary per-request work
- **Invocations**: Chatty endpoints, polling, unbounded cron, webhook storms
- **Bandwidth**: Large responses, uncompressed assets, missing cache headers
- **Build**: Heavy bundling, no caching, bloated dependencies
- **Assets**: Oversized images/media, missing optimization/resizing
- **Data access**: N+1 queries, repeated API calls, no memoization/caching
- **Rendering**: Dynamic where static works (no ISR/SSG leverage)
- **External calls**: No timeouts/retries/backoff in hot paths
- **Logging**: Verbose tracing in production hot paths

## WORKFLOW

1. **Inventory**: Map execution surfaces (routes, functions, edge, SSR, cron), build tools, heavy computation/data locations
2. **Analyze**: Find cost/perf opportunities—prioritize caching, reducing work/invocations, shrinking payloads, build optimization
3. **Report**: Write findings using the template below

## REPORT TEMPLATE

```markdown
# Vercel Efficiency Audit (XX)

_Date: YYYY-MM-DD | Repository: <name> | Auditor: AI (read-only)_

## Executive Summary
- Top 5 cost drivers
- Top 10 actions (ranked)
- Quick wins (<1 day) | Medium lifts (1-3 days) | Bigger bets (multi-day)

## Cost Surface Inventory
- Runtime surfaces (API routes, functions, edge, middleware, SSR)
- Background/cron jobs
- High-traffic endpoints (inferred, note uncertainty)
- Build pipeline summary
- Large assets & heavy dependencies

## Findings

### 1. [Title]

| Attribute | Detail |
|-----------|--------|
| Severity | Critical/High/Medium/Low |
| Cost Driver | runtime/invocations/bandwidth/build/other |
| Evidence | File paths, function names, brief description |
| Recommendation | What to change (no code patches) |
| Expected Impact | e.g., "reduce invocations ~30%", "cut payload ~50%" |
| Effort | S/M/L |
| Risks/Tradeoffs | |
| Validation | How to verify (metrics, tests) |

[Repeat for each finding...]

## Cross-Cutting Recommendations
- Caching strategy (HTTP/CDN/app-level)
- External call standards (timeouts, retries, backoff)
- Prod logging guidelines
- Performance budgets (payload size, TTFB, function duration)

## Measurement Plan
- Metrics: TTFB, p95 latency, function duration, invocations, response sizes, cache hit rate, build time
- Instrumentation approach
- Rollout strategy

## Appendix
- Files reviewed
- Assumptions & unknowns
```

## BEFORE SUBMITTING

Verify:

- ✓ No files modified
- ✓ Created exactly one new file with correct naming
- ✓ All recommendations reference specific file paths
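To give a concrete sense of what the audit's recommendations point toward (the report itself stays prose-only), the "external calls" driver above usually resolves to something like the sketch below: a timeout-plus-backoff wrapper around hot-path fetches. This is a minimal sketch, assuming a Node 18+ TypeScript codebase; the `fetchWithTimeout` name, retry counts, and thresholds are illustrative, not part of the prompt.

```typescript
// Illustrative sketch only; not part of the audit prompt. Assumes Node 18+
// (global fetch and AbortSignal.timeout); names and thresholds are hypothetical.
async function fetchWithTimeout(
  url: string,
  init: RequestInit = {},
  opts: { timeoutMs?: number; retries?: number; baseDelayMs?: number } = {}
): Promise<Response> {
  const { timeoutMs = 3_000, retries = 2, baseDelayMs = 250 } = opts;
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url, {
        ...init,
        // Abort if the upstream hangs, so the function stops burning paid execution time.
        signal: AbortSignal.timeout(timeoutMs),
      });
      // Treat 5xx as transient and retry; hand every other status back to the caller.
      if (res.status >= 500 && attempt < retries) {
        throw new Error(`upstream responded ${res.status}`);
      }
      return res;
    } catch (err) {
      if (attempt >= retries) throw err;
      // Exponential backoff with jitter between attempts.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Example: cap a hot-path call at roughly 3 s and at most 3 total attempts.
// const res = await fetchWithTimeout("https://api.example.com/items");
```

The caching and rendering drivers follow the same shape: the audit names the file and the change (for example, adding cache headers or ISR revalidation), and the implementation is left to a follow-up change outside the read-only report.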
This prompt is released under CC0 (Public Domain). You are free to use it for any purpose without attribution.
Explore similar prompts based on category and tags:

- Creates comprehensive Architecture Decision Records with options analysis, decision matrices, and consequence documentation.
- Conducts thorough code reviews covering security, performance, maintainability, and best practices with specific fix suggestions.
- Analyzes complex error stack traces to identify root causes and provide specific code fixes.
- Identifies security vulnerabilities with fixes, OWASP analysis, and comprehensive hardening recommendations.