Context Integrations (MCP and Beyond)
Connecting code, docs, design, and data to the AI.
An AI-augmented workflow only works if the model can see the right context. When it cannot see schemas, docs, or designs, it hallucinates.
The Velocity Framework assumes the use of a context integration layer (for example, based on the Model Context Protocol, MCP) that connects the IDE to external systems.
What the Integration Layer Does
Regardless of vendor, the integration layer should:
- Expose code and repos beyond the current workspace (e.g., shared libraries, infrastructure repos).
- Expose schemas and APIs (databases, HTTP services, event contracts) in a safe, read-only way.
- Expose documentation (internal docs, runbooks, ADRs) so AI can answer "how does X work here?" with project-specific knowledge.
- Expose work tracking (issues, tickets, PRs) to support planning and reporting flows.
This reduces copy-paste and keeps prompts short, while still giving the model enough context to be useful.
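To make the "schemas, not data" idea concrete, here is a minimal sketch of a schema-exposure tool using only Python's standard-library sqlite3 module. The database path, table, and function name are invented for the demo; the point is that the tool hands the model table DDL while never reading row contents.

```python
import os
import sqlite3
import tempfile

def describe_schema(db_path: str) -> str:
    """Return the CREATE statements for every table, reading no row data."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT sql FROM sqlite_master "
            "WHERE type = 'table' AND sql IS NOT NULL"
        ).fetchall()
    finally:
        conn.close()
    return "\n".join(sql for (sql,) in rows)

# Demo: build a throwaway database, then expose only its shape.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")
conn.commit()
conn.close()

print(describe_schema(db_path))  # prints the CREATE TABLE statement only
```

A real integration would wrap a function like this as a tool the IDE can call; the email address inserted above never appears in the tool's output.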
Typical Integration Categories
A healthy Velocity pod will usually have integrations for:
- Version control: search issues, PRs, and related repos.
- Databases: see schemas and constraints, but not raw production data.
- Design systems: reference components, tokens, and layout specs.
- Documentation: fetch relevant pages from internal docs portals or wikis.
The exact tools and servers are implementation details. Today we often use MCP-compatible servers in Cursor to achieve this, but the framework treats MCP as one way to realize the pattern, not the only way.
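As one concrete realization of the pattern, MCP-compatible servers can be registered in Cursor through a `.cursor/mcp.json` file. The server name, package, and environment variable below are hypothetical placeholders, not real published servers:

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "@your-org/docs-mcp-server"],
      "env": { "DOCS_BASE_URL": "https://docs.internal.example.com" }
    }
  }
}
```

A pod would typically register one such entry per category above (version control, database schemas, design system, docs), each pointed at self-hosted or scope-restricted endpoints as governance requires.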
Safety and Governance
Any integration must respect the security and privacy rules defined in the Governance section:
- No direct access to secrets or sensitive production data.
- Clear separation between metadata/schemas and real user data.
- Configurable to match client-specific policies (e.g., self-hosted endpoints, restricted scopes).
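One way to enforce the metadata/data separation is to funnel every integration query through a single checkpoint that allowlists metadata statements and rejects everything else. This is a minimal sketch under that assumption; the allowlist, function name, and demo table are invented for illustration.

```python
import sqlite3

# Only statements that return schema metadata are permitted (hypothetical list).
ALLOWED_PREFIXES = (
    "SELECT sql FROM sqlite_master",   # table DDL
    "PRAGMA table_info",               # column metadata
)

def run_metadata_query(conn: sqlite3.Connection, query: str):
    """Execute a query only if it targets schema metadata, never row data."""
    normalized = query.strip()
    if not normalized.startswith(ALLOWED_PREFIXES):
        raise PermissionError(f"Blocked non-metadata query: {normalized[:60]}")
    return conn.execute(normalized).fetchall()

# Demo: a table containing sensitive data the AI must never see.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, ssn TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, '000-00-0000')")

print(run_metadata_query(conn, "PRAGMA table_info(accounts)"))  # column names and types

try:
    run_metadata_query(conn, "SELECT ssn FROM accounts")
except PermissionError as err:
    print(err)  # the data query is refused
```

A production checkpoint would be more robust (parse rather than prefix-match, log refusals, honor per-client policy), but the shape is the same: one chokepoint between the model and the data.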
The goal is simple: give AI enough context to be useful, while ensuring nothing leaves the boundaries it shouldn’t.