MCP Server Integrations
When to use MCP servers to connect AI to external tools - practical patterns and security guardrails
What Is MCP?
Model Context Protocol (MCP) lets Cursor connect to external data sources and tools: databases, design tools, project management systems, browsers. AI can query these systems directly instead of you copy-pasting data.
For MCP setup instructions, see Cursor Docs. This playbook covers when and how to use MCP effectively.
The Core Question: Do You Actually Need MCP?
Most teams don't need MCP servers.
Cursor already handles:
- File access (native)
- Terminal commands (native)
- Codebase search (native)
- Web browsing (built-in browser tools)
Use MCP only when:
- You frequently need context from external systems
- Copy-pasting that context is tedious
- The data changes often (not static docs)
- Security is manageable (see guardrails below)
When MCP Shines: Concrete Scenarios
MCP delivers ROI fast in these situations:
Scenario 1: Design-to-Code Workflow
Setup time: 30 minutes (Figma MCP)
Payback time: 2-3 hours of work
Before MCP:
- Switch to Figma 20+ times per day
- Manually inspect spacing, colors, fonts
- Copy values, switch back, paste
- Miss subtle design updates
- Time lost: 15-20 min/day per developer
With MCP:
"Implement @figma/Dashboard/UserCard component
Match exact spacing, colors, typography from design"
AI reads Figma directly, gets pixel-perfect values
Savings: 15-20 min/day × 5 developers = 75-100 min/day
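What the AI extracts typically lands in a design-token module the team can reuse. A minimal sketch of that output, with hypothetical token names and values rather than anything from a real Figma file:

```typescript
// Hypothetical design tokens an AI might extract from Figma via MCP.
// All names and values here are illustrative, not from a real design file.
export const tokens = {
  color: {
    primary: "#2563eb",
    surface: "#ffffff",
    textMuted: "#6b7280",
  },
  spacing: {
    xs: 4, // 4px base scale, common in design systems
    sm: 8,
    md: 16,
    lg: 24,
  },
  typography: {
    body: { fontFamily: "Inter", fontSize: 16, lineHeight: 24 },
  },
} as const;

// Convert a numeric spacing token to a CSS length
export function px(value: number): string {
  return `${value}px`;
}
```

Pinning extracted values in one module means a design update from Figma changes a single file instead of scattered magic numbers.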
Scenario 2: Database Schema Exploration
Setup time: 15 minutes (Database MCP, read-only)
Payback time: First debugging session
Before MCP:
You: Need to understand relationships between tables
[Open database tool]
[Run DESCRIBE queries]
[Copy table schemas]
[Switch back to Cursor]
[Paste into chat]
[Repeat for related tables]
With MCP:
"Show me all tables that reference the users table
Explain the relationship structure"
AI queries database directly, sees foreign keys, constraints
Especially valuable for:
- New developers onboarding
- Legacy databases with poor documentation
- Complex many-to-many relationships
- Planning migrations
Scenario 3: Issue-to-Implementation Flow
Setup time: 20 minutes (GitHub/Linear MCP)
Payback time: 5-10 issues
Before MCP:
[Open Linear]
[Read issue #234]
[Read acceptance criteria]
[Check comments for clarifications]
[Copy requirements]
[Switch to Cursor]
[Paste requirements]
[Start coding]
[Realize you missed a comment]
[Switch back to Linear]
[Repeat]
With MCP:
"Implement @linear/VEL-234"
AI reads:
- Issue description
- Acceptance criteria
- Comments and clarifications
- Linked issues/dependencies
AI suggests approach based on complete context
ROI calculation:
- Average 3 min saved per issue
- 10 issues/day across team
- 30 min/day saved
- 2.5 hours/week
When Setup Time ISN'T Worth It
Don't use MCP if:
- Feature request happens once (one-time data fetch)
- Documentation is static and complete (just document it)
- Team has fewer than 3 developers (ROI too small)
- Data is highly sensitive (security risk > productivity gain)
- Copy-paste takes < 30 seconds (manual is fine)
Rule of thumb: If you'll reference the external system 10+ times per week, MCP is worth it.
Recommended MCP Servers
Frontend Development
Figma MCP
Why it's useful:
Eliminates the designer-developer translation gap.
Before MCP:
You: "What's the button padding in the design?"
[Switch to Figma]
[Inspect element]
[Copy values]
[Switch back]
You: "It's 12px vertical, 24px horizontal... wait, which breakpoint?"
With Figma MCP:
You: "Match the button styling from @figma/DesignSystem/Button"
AI: [Reads Figma directly, gets exact values]
Best use cases:
- Implementing designs with pixel-perfect accuracy
- Extracting color palettes and spacing scales
- Verifying implementation matches design specs
When to skip: If designs are static or rarely change, just document the design tokens once.
Browser/Playwright MCP
Why it's useful:
Visual debugging and testing rendered pages.
Use cases:
- Take screenshots of running app for visual comparison
- Debug CSS/layout issues by having AI see the rendered page
- Interact with browser for testing flows
Example:
You: "Take a screenshot of @localhost:3000/dashboard
and compare to @figma/Dashboard design"
AI: [Captures screenshot, compares, identifies differences]
When to skip: For backend-only projects or simple CLI tools.
Backend Development
Database MCP (Postgres, MySQL)
Why it's useful:
AI can query schema, explore data, and understand your database structure without you pasting DESCRIBE TABLE outputs.
Use cases:
- "Show me all tables that reference users"
- "What indexes exist on the orders table?"
- "Generate migration to add created_at to all tables missing it"
- Debug data issues: "Why is this query slow?"
Security critical: Never point MCP at production database. Read-only dev/staging only.
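In practice that means wiring the MCP server to a dedicated read-only connection string. A sketch of a project-level `.cursor/mcp.json` (the server package, argument shape, and connection details are assumptions; check your MCP server's own README, and keep real credentials out of version control):

```json
{
  "mcpServers": {
    "postgres-readonly": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://cursor_mcp:CHANGE_ME@localhost:5432/myapp_development"
      ]
    }
  }
}
```

The user in the connection URL should be the read-only role described under Security Guardrails below, never a superuser.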
Real Case Study: Debugging Slow Report Query
Problem: Monthly sales report timing out (>30 seconds) - Client A project.
Traditional debugging (60-90 min):
1. Open database client (pgAdmin)
2. Run EXPLAIN ANALYZE manually
3. Copy results to text file
4. Research what "Seq Scan" means
5. Identify missing indexes
6. Check if columns are indexed
7. Draft CREATE INDEX statements
8. Test in dev database
9. Measure improvement
10. Document findings
With Database MCP (20 min):
Step 1: Ask AI to analyze query
You: "This query is slow @src/reports/SalesReport.ts line 23
@database analyze the query execution plan
What's causing the slowness?"
AI: [Queries database via MCP]
"Running EXPLAIN ANALYZE...
Issues found:
1. Sequential scan on orders table (2.1M rows)
2. Missing index on orders.created_at
3. Join with products doing nested loop (expensive)
4. No index on orders.product_id
Estimated impact:
- Add index on (created_at, product_id): 80% improvement
- Consider composite index for this query pattern"
Step 2: Generate optimal indexes
You: "Generate the CREATE INDEX statements
Show me before/after explain plans"
AI: [Queries database, generates DDL]
"Recommended indexes:
CREATE INDEX idx_orders_created_at_product
ON orders(created_at, product_id);
CREATE INDEX idx_orders_user_status
ON orders(user_id, status)
WHERE status != 'cancelled';
Before: Seq Scan cost=48234
After (estimated): Index Scan cost=892
Expected improvement: 30s → 0.5s"
Step 3: Verify and test
You: "Check if any similar indexes already exist"
AI: [Queries database]
"Existing indexes on orders:
- idx_orders_user_id (user_id)
- idx_orders_created (created_at) ← This exists!
But query needs composite index for optimal performance"
Step 4: Generate migration
You: "Generate migration file following @migrations/2024-10-15.sql"
AI: [Creates migration]
Result:
- Time: 20 minutes vs 60-90 minutes
- Query time: 30s → 0.4s (75x improvement)
- Indexes created: 1 composite, 1 partial
- Documentation: AI generated explanation in migration
Key advantage: AI saw the actual database state (existing indexes, table sizes, constraints) and tailored recommendations accordingly.
Workflow: Schema Exploration for New Feature
You: "I need to add a 'favorites' feature for products
@database show me:
1. Current products table structure
2. How user-product relationships are modeled
3. Suggest schema for favorites"
AI: [Queries database via MCP]
"Current products table:
- id (primary key)
- name, description, price
- category_id (foreign key)
- created_at, updated_at
User-product relationships:
- purchases: orders table (users -> orders -> order_items -> products)
- reviews: product_reviews table (many-to-many via product_reviews)
- cart: cart_items table (temporary relationship)
Recommended favorites schema:
CREATE TABLE user_favorites (
user_id INTEGER REFERENCES users(id),
product_id INTEGER REFERENCES products(id),
created_at TIMESTAMP DEFAULT NOW(),
PRIMARY KEY (user_id, product_id)
);
CREATE INDEX idx_user_favorites_user ON user_favorites(user_id);
CREATE INDEX idx_user_favorites_product ON user_favorites(product_id);
This matches your existing pattern for user-product relationships."
You: "Generate the migration and repository code"
AI: [Creates migration + TypeORM repository matching existing patterns]
Time saved: 30 minutes of schema research and pattern matching.
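The generated repository would use TypeORM in this project; below is a framework-free sketch of the same contract, with an in-memory Set standing in for the user_favorites table (class and method names are ours, not generated output):

```typescript
// Sketch of a favorites repository matching the user_favorites schema above.
// An in-memory Set stands in for the database; a real version would use TypeORM.
class FavoritesRepository {
  // Composite string key mirrors PRIMARY KEY (user_id, product_id)
  private rows = new Set<string>();

  private key(userId: number, productId: number): string {
    return `${userId}:${productId}`;
  }

  add(userId: number, productId: number): void {
    // Set semantics make the insert idempotent, like ON CONFLICT DO NOTHING
    this.rows.add(this.key(userId, productId));
  }

  remove(userId: number, productId: number): void {
    this.rows.delete(this.key(userId, productId));
  }

  isFavorite(userId: number, productId: number): boolean {
    return this.rows.has(this.key(userId, productId));
  }

  listForUser(userId: number): number[] {
    const prefix = `${userId}:`;
    return [...this.rows]
      .filter((k) => k.startsWith(prefix))
      .map((k) => Number(k.slice(prefix.length)));
  }
}
```

The composite key makes adding the same favorite twice a no-op, matching the uniqueness the composite primary key enforces at the database level.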
GitHub MCP
Why it's useful:
Pull in issue/PR context without leaving Cursor.
Use cases:
- "Implement feature described in issue #234"
- "Review the PR discussion for #189 before making changes"
- "What's the status of the auth refactor branch?"
When to skip: If you're not actively using GitHub issues/projects for planning.
Linear/Jira MCP
Why it's useful:
Same as GitHub MCP - pull requirements directly into context.
Example:
You: "Implement @linear/VEL-123"
AI: [Reads issue details, acceptance criteria, comments]
AI: "This issue requires adding rate limiting to /api/auth/login..."
When to skip: If tickets are just titles without detailed specs.
Real Workflow: Feature Implementation from Issue to PR
Scenario: Implement user notification preferences feature - Client B project.
Without MCP (45-60 min overhead):
1. Open Linear, read issue VEL-456
2. Copy requirements to notes
3. Check comments for clarifications
4. Find related issues mentioned
5. Open those, read context
6. Switch to Cursor
7. Start implementing
8. Realize acceptance criteria unclear
9. Switch back to Linear
10. Read comments again
11. Ask PM for clarification in comment
12. Wait for response
13. Continue implementation
With Linear MCP (5 min overhead):
Step 1: Pull full context
You: "Show me @linear/VEL-456 with full details
Include acceptance criteria, comments, and linked issues"
AI: [Reads via MCP]
"Issue VEL-456: User Notification Preferences
Description:
Users need granular control over notification types.
Currently all-or-nothing (all notifications or none).
Acceptance Criteria:
✓ User can toggle email notifications per category
✓ Categories: Orders, Promotions, Account, System
✓ Default: All enabled for new users
✓ Settings page in user profile
✓ Immediate effect (no delay)
Comments (3):
- PM: 'Should apply to existing users with all enabled'
- Designer: 'Figma design ready: [link]'
- Backend Lead: 'Use JSONB column for flexibility'
Linked Issues:
- VEL-455: Email service refactor (dependency - merged)
- VEL-457: Push notifications (future - blocked by this)
Status: Ready for Development
Assigned: You
Sprint: Current"
Step 2: Clarify ambiguities
You: "Based on this issue, clarify:
1. Should we support push notifications now or just email?
2. What happens to existing notification preferences?"
AI: [Analyzes issue context]
"Based on comments and linked issues:
1. Email only for now. VEL-457 will add push later.
2. Existing users: Default all categories enabled (per PM comment)
Migration needed to set defaults."
Step 3: Plan implementation
You: "Create implementation plan
Check @src/users/ for existing preference patterns"
AI: [Combines Linear context + codebase]
"Implementation Plan:
1. Database:
- Add preferences JSONB column to users table
- Migration with default: {email: {orders: true, ...}}
2. Backend:
- PreferenceService (CRUD for preferences)
- Update NotificationService to check preferences
- API endpoints: GET/PUT /api/users/me/preferences
3. Frontend:
- Settings page component
- Toggle UI for each category
- Immediate save on toggle
4. Tests:
- Unit: PreferenceService
- Integration: API endpoints
- E2E: Toggle preferences, verify emails
Estimated: 6-8 hours
Matches existing pattern in @src/users/UserSettings.ts"
Step 4: Implement with context
You: "Implement step 1 (database)
Follow patterns from @migrations/
Use JSONB as suggested in issue"
AI: [Generates migration with proper defaults]
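The JSONB column's default and the notification check can be sketched in TypeScript (the type names and category list come from the acceptance criteria above; the helper name is hypothetical):

```typescript
// Notification categories from the acceptance criteria in VEL-456
type Category = "orders" | "promotions" | "account" | "system";

interface NotificationPreferences {
  email: Record<Category, boolean>;
}

// Default written by the migration: all categories enabled,
// applied to new AND existing users per the PM's comment
export const defaultPreferences: NotificationPreferences = {
  email: { orders: true, promotions: true, account: true, system: true },
};

// Guard NotificationService sends with this check; a missing
// preferences row falls back to the default (all enabled)
export function shouldSendEmail(
  prefs: NotificationPreferences | null,
  category: Category
): boolean {
  return (prefs ?? defaultPreferences).email[category];
}
```

Keeping the fallback in one helper means the "existing users default to all enabled" rule holds even before the backfill migration runs.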
Step 5: Update issue as you progress
You: "Add comment to @linear/VEL-456:
'Database migration complete. Starting backend implementation.'"
AI: [Posts comment via MCP]
"Comment added to VEL-456. Team can see progress."
Benefits:
- No context switching: All info in Cursor
- Complete context: AI sees comments, links, acceptance criteria
- Clarifications: AI helps interpret ambiguous requirements
- Progress tracking: Update issue without leaving Cursor
- Time saved: 45 min → 5 min per feature
Real metrics from Client B:
- 12 features implemented with Linear MCP
- Average 40 min saved per feature (context switching)
- 8 hours saved over 2 weeks
- Fewer "I missed a requirement" bugs
Decision Framework: Is MCP Worth It for Your Team?
Use this framework to decide whether to invest in MCP setup:
High-Value MCP Use Cases
| Scenario | MCP Worth It? | Setup Time | Payback Time | Why |
|---|---|---|---|---|
| Implementing from Figma daily | ✅ Yes | 30 min | 2-3 hours | High frequency, eliminates context switching, pixel-perfect accuracy |
| Active sprint with Linear/Jira | ✅ Yes | 20 min | 5-10 issues | Pulls complete context, reduces misunderstood requirements |
| Database-heavy development | ✅ Yes | 15 min | First debug session | Schema exploration, query optimization, migration planning |
| Visual debugging frontend | ✅ Yes | 25 min | First layout bug | AI sees rendered output, spots CSS issues faster |
| API integration work | ✅ Maybe | 30 min | Depends on frequency | Useful if constantly checking external APIs |
Low-Value MCP Use Cases
| Scenario | MCP Worth It? | Why Skip It |
|---|---|---|
| One-time DB migration | ❌ No | Just query manually once, paste schema into chat |
| GitHub with sparse issues | ❌ No | If issues are just titles, not enough context to justify setup |
| Backend API with good docs | ❌ No | Cursor's native tools + docs sufficient |
| Static design system | ❌ No | Document design tokens once, reference with @docs |
| Small team (fewer than 3 devs) | ⚠️ Maybe | ROI is lower, but can still be valuable for individual productivity |
The "10x Rule" for MCP
Set up MCP if:
- You'll reference the external system 10+ times per week
- Each reference saves 2+ minutes of context switching
- The data changes frequently (not static)
Example calculation:
Figma MCP:
- 15 design references per day
- 3 minutes saved per reference
- 45 min/day saved
- 3.75 hours/week saved
- Setup time: 30 minutes
ROI: Pays back in 1 day ✅
Skip MCP if:
- One-time or rare usage
- Data is static (document it instead)
- Security concerns outweigh productivity gains
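The 10x rule boils down to a payback computation you can sanity-check in a few lines (the function and field names below are ours, not from any tool):

```typescript
// Payback estimate for an MCP server, per the "10x rule" above
interface McpCandidate {
  referencesPerDay: number;   // how often you'd hit the external system
  minutesSavedPerRef: number; // context-switch time avoided each hit
  setupMinutes: number;       // one-time install/config cost
}

export function paybackDays(c: McpCandidate): number {
  const savedPerDay = c.referencesPerDay * c.minutesSavedPerRef;
  return c.setupMinutes / savedPerDay; // working days until break-even
}

// The Figma example from the text: 15 refs/day at 3 min each, 30 min setup
const figma: McpCandidate = {
  referencesPerDay: 15,
  minutesSavedPerRef: 3,
  setupMinutes: 30,
};
// paybackDays(figma) is 30 / 45, roughly 0.67 days: pays back within a day
```

If the result comes out above a week, that is a strong signal to copy-paste manually instead.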
Security Guardrails
MCP servers can access sensitive data. Lock them down.
Database MCP Security
Hard rules:
## Database MCP: Non-negotiable
- [ ] Read-only user only (GRANT SELECT, no INSERT/UPDATE/DELETE)
- [ ] Development or staging database only (NEVER production)
- [ ] Connection via .env that's in .cursorignore
- [ ] No tables with PII/sensitive data (or mask them)
- [ ] Review what AI queries (can expose data in logs)
Example read-only user:
-- Create read-only user for MCP
CREATE USER cursor_mcp WITH PASSWORD 'xxx';
GRANT CONNECT ON DATABASE myapp_development TO cursor_mcp;
GRANT USAGE ON SCHEMA public TO cursor_mcp;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO cursor_mcp;
-- Verify (should show only SELECT)
SELECT * FROM information_schema.role_table_grants
WHERE grantee = 'cursor_mcp';
Figma MCP Security
What gets exposed:
- Your Figma files (designs, component specs)
- File names, page names, layer structure
- Comments and descriptions
Not exposed:
- Other team's private files (unless shared)
Risk: Low, unless designs contain confidential product plans.
GitHub/Linear MCP Security
What gets exposed:
- Issue titles, descriptions, comments
- PR discussions, code review comments
- Project board structure
Risk: Medium if repos contain client names, security discussions, or internal debates.
Mitigation:
- Use MCP only on projects you own
- Review what AI queries (visible in logs)
- Don't use on client repos with sensitive info
Common MCP Anti-Patterns
Anti-Pattern 1: Over-Configuring
Mistake:
Installing 10 MCP servers "just in case."
Problem:
- Clutters context
- Slows down AI responses
- Security surface area grows
- Maintenance burden
Better:
Start with zero MCP servers. Add one only when you feel genuine pain.
Anti-Pattern 2: Using MCP for Static Data
Mistake:
Setting up Database MCP to read a schema that hasn't changed in 6 months.
Better:
# docs/database-schema.md
Just document the schema once, reference with @docs
Rule: If data changes less than weekly, document it instead of MCP.
Anti-Pattern 3: MCP as Primary Workflow
Mistake:
Routing all AI interactions through MCP servers.
Problem:
- Slower than native Cursor features
- More complexity
- Harder to debug
Better:
MCP is for context enrichment, not primary workflow.
Good: "Here's my code @src/feature.ts and design @figma/Feature"
Bad: "@database get schema, @github get issue, @figma get design..."
Anti-Pattern 4: Production Database MCP
Never do this:
❌ Connecting MCP to production database "for debugging"
Why this is catastrophic:
- AI could accidentally suggest destructive queries
- Cursor chat logs contain production data
- Security audit nightmare
- GDPR/compliance violation if PII exposed
Always:
- Development or staging database only
- Read-only access
- No PII tables
When to Remove MCP
Signs MCP isn't adding value:
- You installed it but never use it
- You forget to update credentials and don't notice
- Faster to manually copy-paste the data
- MCP queries slow down AI responses
- Team isn't using it consistently
It's okay to uninstall. Less complexity is better than "nice to have" tools.
MCP Setup Checklist
If you decide MCP is worth it:
## Before Installing MCP Server
- [ ] Clear use case (what pain does this solve?)
- [ ] Team will use it regularly (not just you)
- [ ] Security reviewed (what data gets exposed?)
- [ ] Credentials in .env and .cursorignore
- [ ] Read-only access where applicable
- [ ] Documented in project README
- [ ] Team trained on when to use it
## After Installing
- [ ] Test with example query
- [ ] Verify AI can access data
- [ ] Confirm security constraints work
- [ ] Document common patterns for team
- [ ] Review after 2 weeks: still providing value?
Bottom line: MCP is powerful but often overkill. Start without it. Add only when the pain of not having it is clear.