Battle-Tested Patterns
Five patterns from real client projects that increased delivery velocity by 15-30%. Each comes with the metrics we actually measured.
Pattern 1: The Characterization Test Safety Net
Problem: Need to refactor legacy code, but no tests exist. Too risky.
Solution from Client B project:
Step 1: "Generate characterization tests for @LegacyService.ts
- Lock in current behavior (even if buggy)
- Cover all public methods
- Mock external dependencies
- Aim for 80%+ coverage"
Step 2: Run tests, ensure all pass
Step 3: "Now refactor @LegacyService.ts to:
- Extract validation logic
- Improve error handling
- Add logging
Tests must still pass."
Step 4: Run tests again, verify green
Metrics:
- Before: 0% test coverage, afraid to touch code
- After: 85% coverage, confident refactoring
- Time: 2 hours, versus an estimated 2 days by hand
Key insight: AI excels at generating comprehensive test cases you might miss manually.
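What Step 1's output looks like in practice: a minimal sketch assuming Jest as the test runner and a hypothetical `getOrders` method (the real tests cover whatever public API your service actually exposes):

```typescript
import { LegacyService } from './LegacyService';

// Characterization tests lock in CURRENT behavior, even where it looks buggy.
// They are a safety net for refactoring, not a spec of correct behavior.
describe('LegacyService (characterization)', () => {
  // Mock the external dependency so the tests exercise only this service.
  const db = { query: jest.fn().mockResolvedValue([]) };
  const service = new LegacyService(db as any);

  it('returns an empty list for an unknown user (current behavior)', async () => {
    await expect(service.getOrders('no-such-user')).resolves.toEqual([]);
  });

  it('rejects an empty id (current behavior, possibly a bug)', async () => {
    await expect(service.getOrders('')).rejects.toThrow();
  });
});
```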
Pattern 2: The Progressive Enhancement
Problem: Need to add a feature, but unsure of the best approach.
Solution from Client A project:
Iteration 1: "Implement basic user preferences:
- Save/load from database
- Just dark mode toggle
- No validation yet"
[Review, test, validate approach]
Iteration 2: "Enhance user preferences:
- Add email notifications toggle
- Add language selection
- Add input validation"
[Review, test, validate]
Iteration 3: "Production-ready:
- Add caching layer
- Add audit logging
- Add migration for existing users"
Why this works:
- Validates approach early (iteration 1 = 15 mins)
- Course-correct before investing hours
- Each iteration is shippable
Anti-pattern we learned: asking AI to build the entire feature at once = debugging nightmare.
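To show how small iteration 1 really is, here's a sketch of the kind of code it should produce. The `PreferencesService` name and the key-value store interface are illustrative, not from the client codebase:

```typescript
// Iteration 1: the smallest shippable slice. Dark mode only, no validation yet.
interface UserPreferences {
  darkMode: boolean;
}

// Any key-value-ish store works at this stage; swap in the real database later.
interface PrefsStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

export class PreferencesService {
  constructor(private readonly store: PrefsStore) {}

  async load(userId: string): Promise<UserPreferences> {
    const raw = await this.store.get(`prefs:${userId}`);
    return raw ? JSON.parse(raw) : { darkMode: false };
  }

  async save(userId: string, prefs: UserPreferences): Promise<void> {
    await this.store.set(`prefs:${userId}`, JSON.stringify(prefs));
  }
}
```

Iterations 2 and 3 then extend this same class instead of rewriting it.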
Pattern 3: The Database Query Optimizer
Problem: Slow query causing timeouts (real case from Client A).
What worked:
You: "This query times out after 3 minutes:
[paste query]
EXPLAIN output:
[paste EXPLAIN ANALYZE]
Help me optimize."
AI: [analyzes, suggests indexes and query rewrite]
You: "Show me the CREATE INDEX statements and the rewritten query side-by-side with original."
AI: [provides comparison]
You: "Create a benchmark to measure improvement"
AI: [generates benchmark script]
Results:
- Before: 180+ seconds, frequent timeouts
- After: 5-20 seconds average
- Changes: 2 indexes, query rewrite, result caching
- Time invested: 45 minutes
Key insight: AI is excellent at query analysis when given EXPLAIN output. It spots N+1 patterns and missing indexes faster than manual review.
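The benchmark script doesn't need to be fancy; timing both queries over a few runs is enough to confirm the win. A sketch, where `QueryRunner` stands in for whatever database client you use (`pg`, `mysql2`, etc.):

```typescript
import { performance } from 'node:perf_hooks';

// Stand-in for your database client's query method.
type QueryRunner = (sql: string) => Promise<unknown>;

async function benchmark(run: QueryRunner, label: string, sql: string, runs = 5): Promise<void> {
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await run(sql); // execute the query under test
    times.push(performance.now() - start);
  }
  const avg = times.reduce((sum, t) => sum + t, 0) / times.length;
  console.log(`${label}: avg ${avg.toFixed(0)} ms over ${runs} runs, min ${Math.min(...times).toFixed(0)} ms`);
}

// Usage:
//   await benchmark((sql) => client.query(sql), 'original', originalSql);
//   await benchmark((sql) => client.query(sql), 'rewritten', rewrittenSql);
```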
Pattern 4: The API Contract Generator
Problem: Need to build the frontend before the backend is ready.
Solution from Client C project:
Step 1: Define API contract with AI
"Design REST API for shipment tracking:
- POST /api/shipments (create)
- GET /api/shipments/:id (get details)
- PATCH /api/shipments/:id/status (update)
Include:
- Request/response schemas
- Error responses
- OpenAPI spec"
Step 2: Generate mock server
"Generate MSW mock server from this OpenAPI spec:
- Realistic mock data
- Simulate different scenarios (success, errors, delays)
- Support for shipment lifecycle"
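A sketch of what Step 2 produces, using MSW v2's Node server (the handlers and mock fields are illustrative):

```typescript
import { http, HttpResponse } from 'msw';
import { setupServer } from 'msw/node';

// Mock handlers derived from the agreed contract.
const handlers = [
  http.post('/api/shipments', async ({ request }) => {
    const body = (await request.json()) as Record<string, unknown>;
    return HttpResponse.json({ id: 'shp_001', status: 'created', ...body }, { status: 201 });
  }),
  http.get('/api/shipments/:id', ({ params }) =>
    HttpResponse.json({ id: params.id, status: 'in_transit' }),
  ),
  // One handler simulating an error scenario for lifecycle tests.
  http.patch('/api/shipments/:id/status', () =>
    HttpResponse.json({ code: 'INVALID_TRANSITION', message: 'Cannot skip states' }, { status: 409 }),
  ),
];

export const server = setupServer(...handlers);
// In tests: server.listen() in beforeAll, server.close() in afterAll.
```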
Step 3: Generate real backend later
"Implement real NestJS controllers matching this OpenAPI spec:
- Use patterns from @src/orders
- Include validation
- Include tests"Benefits:
- Frontend and backend teams work in parallel
- Contract agreed upfront (fewer integration issues)
- Mock server becomes integration test fixture
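When the backend team picks up Step 3, the controllers are verified against that same spec. A skeleton of what the NestJS side might look like (DTO classes, validation pipes, and the service layer omitted; this is a shape, not the implementation):

```typescript
import { Body, Controller, Get, Param, Patch, Post } from '@nestjs/common';

@Controller('api/shipments')
export class ShipmentsController {
  @Post()
  create(@Body() dto: { origin: string; destination: string }) {
    // Real code: validated DTO classes, delegate to a ShipmentsService
    return { id: 'shp_001', status: 'created', ...dto };
  }

  @Get(':id')
  findOne(@Param('id') id: string) {
    return { id, status: 'in_transit' };
  }

  @Patch(':id/status')
  updateStatus(@Param('id') id: string, @Body() body: { status: string }) {
    return { id, status: body.status };
  }
}
```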
Pattern 5: The Documentation Sync
Problem: Documentation is always stale (real issue at Client B).
Solution:
PR workflow rule:
"When touching @src/[module], also update:
1. Inline JSDoc comments for public APIs
2. README.md in module directory if structure changed
3. OpenAPI spec if endpoints changed
Use AI to generate, but review carefully."
Concrete example:
You: "I just modified @src/auth/AuthService.ts
Generate:
1. Updated JSDoc for public methods
2. Update @src/auth/README.md to reflect new OAuth flow
3. Ensure @openapi/auth.yaml matches new endpoints"
Result:
- Documentation stays current because it's part of the PR checklist
- AI handles the tedious formatting work
- Human reviews for accuracy
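For instance, the JSDoc the AI generates (and you review) might look like this. The method name and OAuth details are hypothetical, not from the client's actual AuthService:

```typescript
export class AuthService {
  /**
   * Exchanges an OAuth authorization code for a signed session token.
   *
   * @param code - Authorization code from the provider's redirect callback.
   * @returns A session token for the authenticated user.
   * @throws Error if the code is empty, expired, or already used.
   */
  async exchangeAuthCode(code: string): Promise<string> {
    if (!code) throw new Error('Authorization code is required');
    return `session:${code}`; // stub; the real method calls the OAuth provider
  }
}
```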