Shared Prompt Library

How to create and maintain a team-wide library of best prompts for consistency and quality

What We Learned

Best prompts should be shared, not siloed. When developers discover effective prompts, capture them for the whole team.

Why Shared Prompts Matter

Before shared prompts:

  • Each developer prompts differently
  • Inconsistent code quality from AI
  • New team members reinvent the wheel
  • Best practices trapped in individual heads

After shared prompts:

  • Consistent AI output across team
  • New developers productive faster
  • Continuous improvement (iterate on prompts)
  • Institutional knowledge preserved

Organization Strategy

Folder Structure

.cursor/rules/prompts/ organized by task type:

prompts/
├── development/
│   ├── api-endpoint.md
│   ├── react-component.md
│   ├── database-migration.md
│   └── background-job.md
├── testing/
│   ├── unit-tests.md
│   ├── integration-tests.md
│   └── e2e-tests.md
├── debugging/
│   ├── performance-issue.md
│   ├── memory-leak.md
│   └── race-condition.md
├── refactoring/
│   ├── extract-service.md
│   ├── add-error-handling.md
│   └── typescript-migration.md
└── documentation/
    ├── api-docs.md
    ├── readme.md
    └── inline-comments.md

Naming Convention

Good names:

  • api-endpoint.md (clear what it does)
  • debug-n-plus-one.md (specific use case)
  • refactor-to-composition.md (clear transformation)

Bad names:

  • prompt1.md (meaningless)
  • good-one.md (too vague)
  • johns-thing.md (personal, not descriptive)

Prompt Template Examples

Development: API Endpoint

.cursor/rules/prompts/development/api-endpoint.md:

# API Endpoint Pattern

Generate REST endpoint for [resource]:

## Structure

- Controller: @src/[module]/[Resource]Controller.ts
- Service: @src/[module]/[Resource]Service.ts
- Repository: @src/[module]/[Resource]Repository.ts
- DTOs: @src/[module]/dto/
- Tests: All layers

## Follow Patterns

- Auth patterns from @src/auth/AuthController.ts
- Error handling from @src/common/errors/
- Validation using class-validator
- OpenAPI annotations

## Requirements

- CRUD operations (unless specified otherwise)
- Pagination for list endpoints
- Filtering/sorting
- Audit logging
- Rate limiting considerations

## Example Usage

"Create product catalog endpoint following @.cursor/rules/prompts/development/api-endpoint.md
Resource: Product
Fields: name, description, price, category, inStock"
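
To make the layering concrete, here is a minimal sketch of the kind of output this template aims for. It assumes a NestJS-style stack (suggested by the class-validator reference above); the Product fields come from the example usage, and the repository API is invented for illustration.

// Minimal sketch of the layered output this template asks for, assuming a
// NestJS-style stack. ProductRepository's API is invented for illustration.
import { Body, Controller, Get, Post, Query } from '@nestjs/common';
import { IsBoolean, IsNumber, IsString } from 'class-validator';

// dto/create-product.dto.ts - validation lives at the boundary
export class CreateProductDto {
  @IsString() name!: string;
  @IsString() description!: string;
  @IsNumber() price!: number;
  @IsString() category!: string;
  @IsBoolean() inStock!: boolean;
}

// ProductRepository.ts - persistence only (interface invented for the sketch)
export interface ProductRepository {
  insert(dto: CreateProductDto): Promise<CreateProductDto>;
  findPage(page: number, limit: number): Promise<CreateProductDto[]>;
}

// ProductService.ts - business logic, no HTTP concerns
export class ProductService {
  constructor(private readonly repo: ProductRepository) {}

  create(dto: CreateProductDto) {
    return this.repo.insert(dto);
  }

  list(page: number, limit: number) {
    return this.repo.findPage(page, limit);
  }
}

// ProductController.ts - thin: routing, validation, delegation
@Controller('products')
export class ProductController {
  constructor(private readonly products: ProductService) {}

  @Post()
  create(@Body() dto: CreateProductDto) {
    return this.products.create(dto);
  }

  @Get()
  list(@Query('page') page = '1', @Query('limit') limit = '20') {
    return this.products.list(Number(page), Number(limit));
  }
}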

Testing: Unit Tests

.cursor/rules/prompts/testing/unit-tests.md:

# Unit Test Pattern

Generate unit tests for [module/class]:

## Structure

- Colocate tests: @src/[module]/[File].test.ts
- Follow patterns from @src/[similar]/[Similar].test.ts
- Use AAA pattern (Arrange, Act, Assert)

## Requirements

- Test happy path first
- Test edge cases: null, undefined, empty, boundary values
- Test error conditions
- Mock external dependencies only
- Use descriptive test names: "should [expected behavior] when [condition]"

## Coverage Goals

- Aim for 80%+ line coverage
- 100% branch coverage for critical paths
- Don't test framework code
- Focus on business logic

## Example Usage

"Generate unit tests for @src/orders/OrderService.ts
following @.cursor/rules/prompts/testing/unit-tests.md
Focus on calculateTotal() method - test discounts, tax, edge cases"
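
For reference, here is a minimal AAA-style test in the shape this template asks for, assuming Jest. OrderService and calculateTotal() echo the example usage; their implementation is invented for illustration.

// OrderService.test.ts - AAA sketch, assuming Jest. The OrderService below
// is invented so the example is self-contained; the real one lives in your codebase.
interface LineItem { price: number; quantity: number; }

class OrderService {
  calculateTotal(items: LineItem[], taxRate: number): number {
    const subtotal = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
    return subtotal * (1 + taxRate);
  }
}

describe('OrderService.calculateTotal', () => {
  it('should apply the tax rate when items are present', () => {
    // Arrange
    const service = new OrderService();
    const items = [{ price: 10, quantity: 2 }];

    // Act
    const total = service.calculateTotal(items, 0.1);

    // Assert
    expect(total).toBeCloseTo(22);
  });

  it('should return zero when the order is empty', () => {
    // Arrange
    const service = new OrderService();

    // Act
    const total = service.calculateTotal([], 0.1);

    // Assert (edge case: empty input)
    expect(total).toBe(0);
  });
});

Pointing the prompt at a real test file like this (via @src/[similar]/[Similar].test.ts) gives the AI a concrete pattern to imitate rather than a description to interpret.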

Debugging: Performance Issue

.cursor/rules/prompts/debugging/performance-issue.md:

# Performance Issue Debugging

Debug performance problem in [feature/module]:

## Information to Provide

**Symptoms:**

- What is slow? (page load, API call, database query, etc.)
- How slow? (actual time vs expected time)
- When does it occur? (always, specific conditions, time of day)

**Context:**

- Relevant code: @[files]
- Profiler output: [paste results]
- Database query times: [paste EXPLAIN ANALYZE if DB-related]
- Network waterfall: [screenshot or data if relevant]

**Constraints:**

- Can't change database schema
- Must maintain backward compatibility
- [Other constraints]

## Expected Analysis

1. Identify bottleneck (with evidence)
2. Explain root cause
3. Suggest fix with trade-offs
4. Provide benchmark strategy

## Example Usage

"Debug performance issue following @.cursor/rules/prompts/debugging/performance-issue.md

Symptoms: Dashboard loads in 8 seconds, should be under 2s
Context: @src/dashboard/DashboardService.ts
Database query times: [paste EXPLAIN output]
Profiler: Shows 90% time in getUserStats() call"
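
The "fix with trade-offs" step often comes down to a batching change like the sketch below (note the library also keeps debug-n-plus-one.md for exactly this case). The getUserStats() name echoes the example; the repository API is invented for illustration.

// Hypothetical shape of an N+1 fix for the getUserStats() bottleneck above.
// The StatsRepo API is invented for illustration.
interface User { id: string; }
interface Stats { userId: string; views: number; }

interface StatsRepo {
  findByUserId(id: string): Promise<Stats>;       // one round trip per call
  findByUserIds(ids: string[]): Promise<Stats[]>; // one round trip total
}

// Before: N+1 - one query per user dominates the profile
async function getUserStatsSlow(users: User[], repo: StatsRepo): Promise<Stats[]> {
  return Promise.all(users.map((u) => repo.findByUserId(u.id)));
}

// After: a single batched query; the trade-off is a larger single result
// set and slightly more complex repository code
async function getUserStatsFast(users: User[], repo: StatsRepo): Promise<Stats[]> {
  return repo.findByUserIds(users.map((u) => u.id));
}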

Refactoring: Extract Service

.cursor/rules/prompts/refactoring/extract-service.md:

# Extract Service Refactoring

Extract business logic into dedicated service:

## Goals

- Separate concerns (controller → service → repository)
- Improve testability
- Enable reuse across features

## Process

1. Identify business logic in controller/component
2. Create service with clear responsibility
3. Move logic to service
4. Update controller to call service
5. Update tests (add service tests, simplify controller tests)
6. Verify behavior unchanged

## Requirements

- Follow patterns from @src/[existing-service]/
- Maintain all existing functionality
- Preserve error handling
- Update all tests
- No breaking changes to API

## Example Usage

"Extract order processing logic to service
following @.cursor/rules/prompts/refactoring/extract-service.md

From: @src/api/OrderController.ts lines 45-120
To: New @src/services/OrderProcessingService.ts
Keep controller thin - validation and routing only"
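
The before/after shape this refactoring produces looks roughly like the sketch below. The order names echo the example usage; the calculation itself is invented for illustration.

// Sketch of the before/after shape this refactoring produces.
interface Order {
  items: { price: number; quantity: number }[];
}

// Before: business logic buried in the controller
class OrderControllerBefore {
  submit(order: Order) {
    const total = order.items.reduce((s, i) => s + i.price * i.quantity, 0);
    if (total <= 0) throw new Error('Order has no billable items');
    return { total }; // ...plus persistence, notifications, etc.
  }
}

// After: logic moves to a dedicated, unit-testable service...
class OrderProcessingService {
  process(order: Order) {
    const total = order.items.reduce((s, i) => s + i.price * i.quantity, 0);
    if (total <= 0) throw new Error('Order has no billable items');
    return { total };
  }
}

// ...and the controller stays thin: validation and routing only
class OrderControllerAfter {
  constructor(private readonly processing: OrderProcessingService) {}

  submit(order: Order) {
    return this.processing.process(order);
  }
}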

Documentation: API Documentation

.cursor/rules/prompts/documentation/api-docs.md:

# API Documentation Pattern

Generate API documentation for [endpoints]:

## Format

Use OpenAPI 3.0 specification

## Requirements

**For each endpoint:**

- Path and HTTP method
- Description (what it does)
- Request schema (parameters, body, headers)
- Response schemas (success + error cases)
- Authentication requirements
- Example requests/responses
- Rate limiting info

**Quality standards:**

- Descriptions in plain English
- Include edge cases in examples
- Document error codes
- Show pagination format

## Example Usage

"Generate API docs following @.cursor/rules/prompts/documentation/api-docs.md

For endpoints in: @src/api/products/
Output format: OpenAPI YAML
Include: Authentication, pagination, error responses"
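
If your stack generates the spec from code, the required annotations might look like the sketch below, which assumes NestJS with @nestjs/swagger; any annotation-driven OpenAPI 3.0 tool follows the same idea. Names echo the example usage.

// Sketch only: assumes NestJS with @nestjs/swagger installed.
import { Controller, Get, Query } from '@nestjs/common';
import { ApiBearerAuth, ApiOkResponse, ApiOperation, ApiTags } from '@nestjs/swagger';

@ApiTags('products')
@ApiBearerAuth() // the auth requirement shows up in the generated spec
@Controller('products')
export class ProductController {
  @Get()
  @ApiOperation({ summary: 'List products', description: 'Returns a paginated product list.' })
  @ApiOkResponse({ description: 'A page of products plus pagination metadata.' })
  list(@Query('page') page = '1') {
    return { page: Number(page), items: [] }; // real handler would call the service
  }
}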

Evolution Process

Prompts should improve over time based on team learnings.

Weekly Review Pattern

Every Friday or at sprint end:

  1. Identify what didn't work:

    • AI misunderstood prompt
    • Output required heavy editing
    • Team members asking same questions
  2. Update prompt with learnings:

    ## Changes
    
    2024-12-05: Added requirement for error logging (AI kept forgetting)
    2024-11-20: Clarified pagination format (inconsistent before)
  3. Communicate updates:

    Team message:
    "Updated @prompts/api-endpoint.md to require error logging.
    AI was consistently missing this. Please use updated version."

Versioning Strategy

For major changes, version your prompts:

prompts/
├── api-endpoint.md (latest)
├── api-endpoint-v1.md (legacy projects)
└── CHANGELOG.md

CHANGELOG.md:

# Prompt Library Changes

## 2024-12-05

- **api-endpoint.md**: Added error logging requirement
- **unit-tests.md**: Increased coverage target to 80%

## 2024-11-20

- **api-endpoint.md**: Clarified pagination format
- **refactoring/extract-service.md**: New prompt added

Team Review Workflow

Adding New Prompts

Process:

  1. Developer discovers an effective prompt
  2. Developer opens a PR with the new prompt file
  3. Team reviews the prompt
  4. Once approved, the prompt merges to main
  5. Addition is announced in the team channel

PR Template:

## New Prompt: [Name]

**Problem it solves:**
[What pain point this addresses]

**When to use:**
[Specific scenarios]

**Example output:**
[Show what AI generates with this prompt]

**Testing:**

- [ ] Used in 2+ different scenarios
- [ ] Output required minimal editing
- [ ] Follows team patterns
- [ ] Documented in prompts/README.md

Updating Existing Prompts

When to update:

  • AI consistently misinterprets current version
  • Team standards change
  • New patterns emerge

Update PR includes:

  • What changed and why
  • Example of old vs new output
  • Affected projects (if any need updating)

Usage Patterns

In Chat Mode

"Create user preferences feature
following @.cursor/rules/prompts/development/api-endpoint.md"

In Composer Mode

"Build notification system with these components:
- API: Follow @.cursor/rules/prompts/development/api-endpoint.md
- Background jobs: Follow @.cursor/rules/prompts/development/background-job.md
- Tests: Follow @.cursor/rules/prompts/testing/integration-tests.md"

In Agent Mode

"Refactor @src/legacy/UserManager.ts
following @.cursor/rules/prompts/refactoring/extract-service.md

Extract business logic to services
Update all tests
Maintain backward compatibility"

Anti-Patterns

Anti-Pattern 1: Over-Specific Prompts

Bad:

# Create User Endpoint Specifically For Our Client Portal With Exact Field Names

[100 lines of hyper-specific details]

Problem: Not reusable, too brittle, becomes outdated quickly.

Better:

# API Endpoint Pattern

[General template with placeholders]
[References to project-specific patterns]

Anti-Pattern 2: Vague Prompts

Bad:

# Generate Good Code

Make sure the code is good quality and follows best practices.

Problem: Too generic; the AI still has to guess.

Better:

# API Endpoint Pattern

[Specific structure requirements]
[Concrete examples]
[Clear success criteria]

Anti-Pattern 3: Hidden in Personal Folders

Bad:

~/my-prompts/awesome-thing.md
(Nobody else knows it exists)

Better:

project/.cursor/rules/prompts/awesome-thing.md
(In Git, everyone uses it)

Anti-Pattern 4: Never Updated

Bad:

Created: 2024-01-15
Last updated: 2024-01-15
[Stale 11 months later]

Better:

Created: 2024-01-15
Last updated: 2024-12-05
Updated every sprint based on learnings

Measuring Success

Track these metrics:

## Prompt Library Metrics (Monthly)

**Usage:**

- Prompts used in PRs: 45 PRs (up from 32 last month)
- Most used: api-endpoint.md (18 times)
- Least used: memory-leak.md (0 times - consider removing?)

**Quality:**

- AI output requiring <20% editing: 78% (target: 80%)
- Time saved vs manual: ~12 hours/week
- New developer onboarding: 1 week (was 3 weeks)

**Evolution:**

- Prompts added: 2
- Prompts updated: 5
- Prompts removed: 1 (outdated)

Quick Start for Your Team

Week 1: Foundation

  1. Create .cursor/rules/prompts/ folder
  2. Add 3 most common tasks as prompts
  3. Document in project README

Week 2-4: Adoption

  1. Use prompts in PRs, iterate based on results
  2. Add 2-3 more prompts from team discoveries
  3. Train team on how to reference prompts

Month 2+: Refinement

  1. Monthly review of prompt effectiveness
  2. Update based on learnings
  3. Remove prompts that aren't used

Remember: Start small (3-5 prompts) and iterate based on actual usage. Prompts that aren't used should be updated or removed. The goal is a living library that evolves with your team.