# Code Review for AI-Assisted Changes

Standards and a checklist for reviewing AI-generated code to ensure quality, security, and maintainability.
## Standard We Use
Every PR with AI assistance includes this section in its description:
## AI Assistance
**Tool:** Cursor Composer
**Scope:** ~70% AI-generated, 30% manual refinement
**What AI did:**
- Generated initial implementation of UserPreferenceService
- Created comprehensive unit tests
- Generated OpenAPI documentation
**What human did:**
- Refined error handling for edge cases
- Added business logic validation
- Optimized database query (AI suggestion was N+1)
- Updated integration tests
**Review focus:**
- [ ] Business logic correct?
- [ ] Security implications checked?
- [ ] Performance acceptable?
- [ ] Tests cover real scenarios?

**Why this works:**
- Transparency builds trust
- Reviewers know where to focus attention
- Team learns what AI does well/poorly
- Creates institutional knowledge
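
One way to keep the section from being skipped is a small CI check on the PR description. Below is a minimal TypeScript sketch, not a drop-in script: the required markers match the template above, but the file name and the `PR_BODY` environment variable are assumptions you would adapt to however your pipeline exposes the PR body (for example, a GitHub Actions step).

```typescript
// check-ai-assistance.ts (hypothetical file name)
// Minimal sketch: fail CI when a PR description lacks the AI Assistance section.
// Assumes the PR body is provided via the PR_BODY environment variable;
// adapt to however your CI exposes it.

const REQUIRED_MARKERS = ["## AI Assistance", "**Tool:**", "**Scope:**"];

function missingMarkers(prBody: string): string[] {
  return REQUIRED_MARKERS.filter((marker) => !prBody.includes(marker));
}

const missing = missingMarkers(process.env.PR_BODY ?? "");

if (missing.length > 0) {
  console.error(`PR description is missing: ${missing.join(", ")}`);
  process.exit(1);
}

console.log("AI Assistance section present.");
```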
## PR Review Checklist for AI Code
Beyond normal code review, check:
### Security - Client Data Protection

- [ ] PR doesn't include any .env files or secrets
- [ ] No hardcoded credentials (database passwords, API keys, tokens)
- [ ] No production URLs or internal IPs
- [ ] No customer data or PII in test fixtures
- [ ] .cursorignore is configured (if this is the project's first AI-assisted PR; see the example after this list)
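
If the project doesn't have one yet, a `.cursorignore` along these lines keeps secrets and client data out of the AI's context. It follows .gitignore-style patterns; the entries below are illustrative and should be adjusted to your repo layout:

```gitignore
# .cursorignore - keep secrets and client data out of AI context
# (illustrative entries; adjust to your repo layout)
.env
.env.*
*.pem
*.key
config/secrets/
**/fixtures/customer-data/
```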
### Security - Code Quality

- [ ] Input validation present (AI often skips it)
- [ ] SQL injection safe (parameterized queries? see the sketch after this list)
- [ ] XSS prevention (proper output escaping?)
- [ ] Authentication/authorization checks in place
- [ ] Error messages don't leak sensitive info
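
For the input-validation and SQL-injection items, this is the shape to look for in the diff. A minimal TypeScript sketch using node-postgres; the `users` table and `findUserByEmail` function are invented for illustration:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// Risky pattern AI sometimes produces: string interpolation into SQL.
// await pool.query(`SELECT * FROM users WHERE email = '${email}'`);

// What to look for in review: validated input plus a parameterized query.
export async function findUserByEmail(email: unknown) {
  // Input validation first (the step AI often skips).
  if (typeof email !== "string" || !email.includes("@") || email.length > 254) {
    throw new Error("Invalid email address");
  }

  // Parameterized query: the driver handles escaping, not string concatenation.
  const result = await pool.query(
    "SELECT id, email FROM users WHERE email = $1",
    [email]
  );
  return result.rows[0] ?? null;
}
```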
### Performance

- [ ] No N+1 queries (AI's most common mistake; see the example after this list)
- [ ] Proper indexing (AI suggests it, you verify it)
- [ ] Pagination for large datasets
- [ ] Caching where appropriate
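
The N+1 item is worth a concrete picture, because the pattern looks perfectly reasonable in isolation. Another hedged node-postgres sketch; the `orders` and `order_items` tables are invented for illustration:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// N+1 pattern AI tends to generate: one extra query per order returned.
export async function getOrdersWithItemsSlow(customerId: number) {
  const orders = (
    await pool.query("SELECT id FROM orders WHERE customer_id = $1", [customerId])
  ).rows;
  for (const order of orders) {
    // One additional round trip for every row in `orders`.
    order.items = (
      await pool.query("SELECT * FROM order_items WHERE order_id = $1", [order.id])
    ).rows;
  }
  return orders;
}

// Reviewed version: fetch all items in a single query and group in memory.
export async function getOrdersWithItems(customerId: number) {
  const orders = (
    await pool.query("SELECT id FROM orders WHERE customer_id = $1", [customerId])
  ).rows;
  const items = (
    await pool.query("SELECT * FROM order_items WHERE order_id = ANY($1)", [
      orders.map((o) => o.id),
    ])
  ).rows;
  for (const order of orders) {
    order.items = items.filter((item) => item.order_id === order.id);
  }
  return orders;
}
```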
### Maintainability

- [ ] Can you explain this code to a teammate?
- [ ] Is it simpler than the manual alternative?
- [ ] Does it follow project patterns?
- [ ] Will it still be obvious in 6 months?
### Tests

- [ ] Tests verify behavior, not implementation (see the sketch after this list)
- [ ] Edge cases covered (not just the happy path)
- [ ] Mocks used only where necessary (AI tends to over-mock)
- [ ] Integration tests for critical paths
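
To make the behavior-versus-implementation and over-mocking items concrete, the Jest-style sketch below contrasts the two; `PriceCalculator` and `DiscountRepo` are invented for illustration:

```typescript
// Hypothetical units under test, for illustration only.
interface DiscountRepo {
  discountFor(customerId: string): Promise<number>;
}

class PriceCalculator {
  constructor(private repo: DiscountRepo) {}

  async total(customerId: string, subtotal: number): Promise<number> {
    const discount = await this.repo.discountFor(customerId);
    return subtotal * (1 - discount);
  }
}

describe("PriceCalculator", () => {
  // Good: a plain stub, and the assertion is about observable behavior.
  it("applies the customer's discount to the subtotal", async () => {
    const repo: DiscountRepo = { discountFor: async () => 0.1 };
    const calc = new PriceCalculator(repo);
    expect(await calc.total("c-1", 200)).toBe(180);
  });

  // Smell: asserts on how the collaborator was called instead of the result.
  // This passes even if the discount math is wrong and breaks on harmless
  // refactors; AI-generated tests often look like this when they over-mock.
  it("calls discountFor with the customer id", async () => {
    const discountFor = jest.fn().mockResolvedValue(0.1);
    const calc = new PriceCalculator({ discountFor });
    await calc.total("c-1", 200);
    expect(discountFor).toHaveBeenCalledWith("c-1");
  });
});
```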