Security & Privacy
Handling IP and secrets in an AI-native workflow.
Security is paramount. When we paste code into an LLM, we are sending data to a third party. We have strict protocols to manage this risk.
Data Classification
We classify data into three tiers (a code sketch of the policy follows the list):
- Tier 1: Public / Generic (Safe for AI)
  - UI logic (buttons, forms).
  - Generic utility functions.
  - Standard architectural patterns.
  - Action: Free to use with any model.
- Tier 2: Business Logic (Sanitized)
  - Proprietary algorithms.
  - Specific business rules.
  - Action: Use only with Privacy Mode enabled (zero-data-retention models).
- Tier 3: PII & Secrets (FORBIDDEN)
  - API keys, passwords, DB connection strings.
  - Customer names, emails, addresses.
  - Action: NEVER paste this into an LLM.
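One way to make the tiers enforceable rather than advisory is to encode them as a guard in tooling. The sketch below is illustrative, not a real API; `DataTier`, `ModelTarget`, and `assertSafeToSend` are hypothetical names.

```ts
// Minimal sketch of the tiering policy as code. All names here are
// illustrative; a real integration would hook into prompt-sending tooling.

type DataTier = 1 | 2 | 3;

interface ModelTarget {
  name: string;
  zeroDataRetention: boolean; // e.g. Privacy Mode enabled in the editor
}

// Throws before anything reaches a model, mirroring the Action rows above.
function assertSafeToSend(tier: DataTier, target: ModelTarget): void {
  if (tier === 3) {
    throw new Error("Tier 3 (PII & secrets) must NEVER be sent to an LLM.");
  }
  if (tier === 2 && !target.zeroDataRetention) {
    throw new Error(
      `Tier 2 data requires a zero-data-retention model; ${target.name} is not configured for one.`
    );
  }
  // Tier 1 is safe for any model: nothing to check.
}
```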
The "Zero-Trust" Configuration
Privacy Mode
We configure Cursor and our MCP servers to use Zero-Data Retention policies where available. This means the model providers (Anthropic, OpenAI) do not retain our prompts or train on our data.
Secret Scanning
We use pre-commit hooks (git-secrets or similar tools) to catch secrets before they are committed, and we run the same pattern scans on anything destined for a prompt.
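The sketch below shows the kind of check such a hook runs. Real scanners like git-secrets or gitleaks ship far more thorough rule sets; the patterns and the `findSecrets` helper here are examples only.

```ts
// Illustrative secret scan of the sort a pre-commit hook performs.
// These three patterns are examples; real tools maintain larger rule sets.

const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                       // AWS access key ID
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // PEM private key header
  /postgres:\/\/\w+:[^@\s]+@/,              // DB connection string with password
];

export function findSecrets(text: string): string[] {
  return SECRET_PATTERNS
    .filter((pattern) => pattern.test(text))
    .map((pattern) => `Possible secret matching ${pattern}`);
}

// In a pre-commit hook: scan the staged files and exit non-zero on any hit,
// which blocks the commit until the secret is removed.
```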
IP Protection
The code generated by the AI belongs to the client. Our contracts specify that while we use AI tools, the output is a work-for-hire product owned by the customer.
Educating the AI
We use .cursorrules to warn the AI itself:
"Do not request or generate hardcoded secrets. Use environment variables (process.env) for all sensitive values."
By treating the AI as a potentially leaky bucket, we design our workflows to ensure no water ever spills.