# Security First - Protecting Client Data

*Critical security configuration before using AI: setting up .cursorignore and protecting sensitive data.*
## The Problem
When you use Cursor with @codebase or Composer, AI can read your project files. If you're not careful, you'll accidentally share:
- Client API keys
- Database credentials
- Private certificates
- Customer data
- Internal infrastructure details
Leaking any of these is a breach of client trust and potentially a legal violation (GDPR, contractual confidentiality obligations, etc.).
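Before pointing the AI at a repository, it is worth auditing what sensitive files already exist in the working tree. A minimal sketch using standard Unix tools; the filename and content patterns below are illustrative, not exhaustive:

```bash
# List files that commonly contain secrets (illustrative patterns only)
find . -type f \( -name ".env*" -o -name "*.pem" -o -name "*.key" -o -name "*.sql" \) \
  -not -path "./node_modules/*" -not -path "./.git/*"

# Scan text files for likely credential assignments (case-insensitive)
grep -rIil --exclude-dir={node_modules,.git} \
  -E "(api[_-]?key|secret|password)[[:space:]]*[:=]" .
```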
## The Solution: Configure .cursorignore
Create .cursorignore in the project root (FIRST THING, before any AI work):

```
# Environment files
.env
.env.*
*.env
.env.local
.env.production
.env.staging
# Credentials & Secrets
**/secrets/
**/credentials/
*.key
*.pem
*.p12
*.pfx
*.cert
*.crt
config/secrets.yml
config/credentials.yml.enc
# Database dumps
*.sql
*.dump
*.backup
dumps/
# Logs (may contain sensitive data)
*.log
logs/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Cloud provider configs
.aws/
.azure/
.gcloud/
terraform.tfvars
terraform.tfstate
*.tfstate
# IDE & Editor configs that might contain paths
.vscode/settings.json
.idea/
# Test data that might be real
test-data/real/
fixtures/production/
# Any file with "secret", "password", "token" in name
*secret*
*password*
*token*
*auth*key*
# Client-specific (add your own)
client-data/
prod-backup/
```

## Verify It Works
Test before using AI:

```bash
# 1. Create .cursorignore with above content
# 2. Test it
echo "SECRET_KEY=test123" > .env
# 3. Ask Cursor
"Search @codebase for the word SECRET_KEY"
# 4. AI should say: "I cannot find SECRET_KEY in the codebase"
# If AI finds it → your .cursorignore is not working
```
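To make the smoke test repeatable, you could script the setup; the canary file names below are hypothetical, and the actual check still happens by asking Cursor:

```bash
#!/usr/bin/env bash
# Plant harmless canary "secrets" that should all be blocked by .cursorignore
set -euo pipefail
echo "CANARY_ENV=do-not-find-me" > .env                   # matches .env
echo "CANARY_TOKEN=do-not-find-me" > fake-api-token.txt   # matches *token*
mkdir -p dumps && echo "-- CANARY_SQL" > dumps/users.sql  # matches dumps/ and *.sql
echo 'Now ask Cursor: "Search @codebase for CANARY". It should find nothing.'
```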
## Git Ignore vs Cursor Ignore

Important: They serve different purposes!
- `.gitignore` → prevents files from being committed to the repository
- `.cursorignore` → prevents the AI from reading files
Always have both:

```
# .gitignore (prevent commits)
.env
*.log
node_modules/
# .cursorignore (prevent AI access)
.env
*.log
# Note: node_modules is OK for AI to read (it's public code)
```
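Keeping the two files in sync is easy to forget. One option is a small script that fails when a must-block pattern is missing from either file; the pattern list here is an assumption, so adjust it per project:

```bash
#!/usr/bin/env bash
# Verify that critical patterns appear in both .gitignore and .cursorignore
patterns=(".env" "*.log" "*.key" "*.pem")
status=0
for file in .gitignore .cursorignore; do
  for p in "${patterns[@]}"; do
    grep -qxF "$p" "$file" || { echo "MISSING: '$p' in $file"; status=1; }
  done
done
exit "$status"
```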
## Team Practice

In every new project:

```markdown
## Security Checklist (before AI work)
- [ ] Created .cursorignore
- [ ] Tested with dummy .env file
- [ ] Verified AI can't see secrets
- [ ] Added client-specific paths
- [ ] Reviewed with team lead
- [ ] Documented in project README
```

## Red Flags
Stop immediately and check .cursorignore if:
- AI suggests actual API keys in responses
- AI mentions real database connection strings
- AI references production URLs you didn't share
- AI knows customer names from data files
## For Limestone Digital Team
Our standard .cursorignore template is in `/templates/cursorignore-template`.
Copy it to every new client project:

```bash
cp /templates/cursorignore-template .cursorignore
# Then add client-specific paths
```

**Code review requirement:** Every project must have a .cursorignore before its first AI-assisted PR.
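One way to enforce this is a small pre-commit or CI gate; the exact wiring (hook, pipeline step) is up to you, and this sketch only checks the basics:

```bash
#!/usr/bin/env bash
# CI gate: refuse to proceed until .cursorignore exists and blocks .env
if [ ! -s .cursorignore ]; then
  echo "ERROR: .cursorignore is missing or empty" >&2
  exit 1
fi
if ! grep -qxF ".env" .cursorignore; then
  echo "ERROR: .cursorignore does not block .env" >&2
  exit 1
fi
echo "OK: .cursorignore present"
```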
## What If I Already Shared Secrets?
If you accidentally shared secrets with AI (via chat or @codebase):
Immediate actions:
- Assume compromised - AI providers may store conversation history
- Rotate all secrets immediately (a concrete AWS example follows this list):
  - Database passwords
  - API keys
  - JWT secrets
  - OAuth client secrets
  - Any credentials mentioned
- Notify the client - be transparent and explain what happened
- Document the incident - what was exposed, what was rotated
- Update .cursorignore - prevent it from happening again
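As a concrete illustration, rotating an exposed AWS access key from the CLI could look like the sketch below; the user name and key ID are placeholders, and other secrets (database passwords, SendGrid keys, etc.) each have their own rotation flow:

```bash
# Placeholders: ci-deploy-user and AKIAOLDKEY0EXAMPLE are hypothetical
aws iam create-access-key --user-name ci-deploy-user   # issue a replacement key
# ...deploy the new key to CI secrets, servers, etc., then disable the old one...
aws iam update-access-key --user-name ci-deploy-user \
  --access-key-id AKIAOLDKEY0EXAMPLE --status Inactive
aws iam delete-access-key --user-name ci-deploy-user \
  --access-key-id AKIAOLDKEY0EXAMPLE                   # remove once nothing breaks
```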
Do NOT:
- Hope nobody noticed
- Delete chat history and pretend it didn't happen
- Wait to see if something bad happens
Example incident response:

```markdown
## Security Incident Report
**Date:** 2024-12-04
**Severity:** High
**Status:** Resolved
**What happened:**
Developer used `@codebase` before configuring .cursorignore.
File `.env.production` was accessible to AI.
**What was exposed:**
- Database connection string
- AWS S3 access keys
- SendGrid API key
**Actions taken:**
1. Rotated database password (10 minutes)
2. Rotated AWS keys (15 minutes)
3. Rotated SendGrid key (5 minutes)
4. Configured .cursorignore
5. Notified client
6. Updated team training
**Prevention:**
- Added .cursorignore to project template
- Added security checklist to onboarding
- Required .cursorignore in PR reviews
```

## Cursor AI Data Policy
What Anthropic (the provider behind Claude) may store:
- Conversation history (which may be used for model improvement)
- Code snippets you share in chat
- Files you reference with @ mentions

What they DON'T store:
- Files blocked by .cursorignore
- Your entire codebase (only what you explicitly share)
Best practice: assume anything shared with AI could be exposed through:
- Anthropic employees (e.g., reviewing data for model training)
- A data breach at the provider
- Subpoenas or other legal requests
Golden rule: Never share what you wouldn't put in a public GitHub repo.