AI Agents Need Governance Context
AI coding assistants, autonomous agents, and LLM-powered workflows are making decisions at a pace that governance programs were never designed to handle. An agent deploying infrastructure, writing data pipelines, or triaging customer data makes dozens of implicit compliance decisions per session. Which data can be stored where? What retention rules apply? Who needs to approve access changes?
The governance knowledge that should inform these decisions exists. It lives in your policy library -- hundreds of carefully authored statements covering data handling, access control, incident response, and operational boundaries. The problem is access. Those statements sit in a system designed for human readers: web dashboards, PDF exports, review workflows. Nothing exposes them in a format that an agent can query programmatically, in real time, as part of its decision-making loop.
This is the gap we set out to close.
What We Built
The Dictiva MCP governance server is a Model Context Protocol implementation that exposes your governance program to AI agents. MCP is an open standard, created by Anthropic, for connecting AI assistants to external data sources. It defines a structured protocol for tools (actions agents can take), resources (data agents can read), and prompts (reusable task templates).
Our server exposes:
Six tools for querying and compiling governance data:
- search_statements -- Full-text and faceted search across your statement library. Filter by domain, enforcement mode, actor applicability, maturity level.
- get_assembly_bundle -- Compile a published assembly (policy, standard, or procedure) into a machine-readable bundle with a signed manifest. The bundle contains every statement in the assembly with full metadata.
- search_glossary -- Query glossary terms with ontology metadata: machine keys, aliases, term types, hierarchical relationships, and linked statements.
- list_applicable_policies_for_actor -- Given an actor type (e.g., "data engineer"), return every governance statement that applies to that role.
- export_ontology -- Export your full glossary as a structured ontology graph in JSON or JSON-LD. Feed it into an agent's context to ground its terminology in your organization's definitions.
- get_statement_guidance -- Retrieve agent-specific guidance for a single statement: what actions are required, who approves, how to escalate, and which controls map to it.
One resource -- governance-summary -- provides a high-level snapshot of your governance posture: statement counts by domain, enforcement distribution, glossary coverage, and assembly statistics.
Two prompts for common governance workflows:
- compliance-check -- Evaluates a planned action against applicable policies and returns a structured pass/fail assessment.
- policy-summary -- Generates a domain-level governance overview with gap analysis.
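On the wire, an MCP client invokes one of these tools with a JSON-RPC 2.0 tools/call request. Here is a rough sketch of the envelope for search_statements -- the method name comes from the MCP specification, but the argument names (query, domain) are illustrative assumptions based on the tool descriptions above, not a documented schema:

```typescript
// Sketch of the JSON-RPC 2.0 envelope an MCP client sends to invoke
// the search_statements tool. "tools/call" is the standard MCP method;
// the argument names are illustrative assumptions.
type ToolCallRequest = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function buildSearchCall(
  query: string,
  filters: Record<string, unknown> = {}
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "tools/call",
    params: {
      name: "search_statements",
      arguments: { query, ...filters },
    },
  };
}

console.log(JSON.stringify(buildSearchCall("data retention", { domain: "data-handling" })));
```

Your MCP client builds and sends this for you; the sketch is only to make the shape of a tool call concrete.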
How It Works
The architecture is straightforward. The MCP server runs as a local process (spawned by your MCP client via npx) and communicates over stdio. When an agent invokes a tool -- say, search_statements with a query for "data retention" -- the server translates that into an authenticated HTTP request against the Dictiva agent API (/agent/statements/search).
Agent (Claude, Cursor, etc.)
|
| stdio (MCP protocol)
v
Dictiva MCP Server (local process)
|
| HTTPS + Bearer token
v
Dictiva Agent API (app.dictiva.com/agent/*)
|
| Tenant-scoped query
v
PostgreSQL + Typesense
Every request carries the API key as a Bearer token. The Dictiva API enforces tenant isolation (the key is scoped to one workspace), RBAC permission checks (the key must have the correct scopes), and rate limiting (per-key, fixed-window). The agent never touches the database directly. It gets the same data a human user would see, subject to the same access controls.
The server itself is stateless. It holds no credentials beyond the API key in the environment variable, maintains no session, and caches nothing. Each tool call is an independent HTTP round-trip.
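Because the server is stateless, the translation step can be pictured as a pure function from tool call to HTTP request. A minimal sketch -- only /agent/statements/search is named in this post, so the glossary path below is an assumption for illustration:

```typescript
// Sketch of the stdio-to-HTTP translation layer. Each tool call becomes
// one authenticated request against the agent API; nothing is cached and
// no session is kept between calls. Only /agent/statements/search is
// confirmed above; the search_glossary path is an assumed example.
const toolPaths: Record<string, string> = {
  search_statements: "/agent/statements/search",
  search_glossary: "/agent/glossary/search", // assumption
};

function toHttpRequest(
  tool: string,
  args: Record<string, string>,
  apiKey: string,
  baseUrl = "https://app.dictiva.com"
): { url: string; headers: Record<string, string> } {
  const path = toolPaths[tool];
  if (!path) throw new Error(`unknown tool: ${tool}`);
  const qs = new URLSearchParams(args).toString();
  return {
    url: `${baseUrl}${path}${qs ? `?${qs}` : ""}`,
    // Every request carries the API key as a Bearer token, as described above.
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```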
Setting Up in 2 Minutes
1. Create an API Key
Navigate to Settings > API Keys in Dictiva. Create a key with these scopes:
- statement:read -- Required for statement search, actor policies, and guidance
- glossary:read -- Required for glossary search and ontology export
- assembly:read -- Required for bundle compilation
Copy the key. It starts with dv_live_ and is only shown once.
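The scope-to-tool mapping above can be written down as a small lookup, handy for predicting which calls a given key can make before you wire it up (a client-side sketch only -- the server is what actually enforces scopes):

```typescript
// Which scope each tool requires, per the scope list above.
const requiredScope: Record<string, string> = {
  search_statements: "statement:read",
  list_applicable_policies_for_actor: "statement:read",
  get_statement_guidance: "statement:read",
  search_glossary: "glossary:read",
  export_ontology: "glossary:read",
  get_assembly_bundle: "assembly:read",
};

// True if a key carrying the given scopes may invoke the tool.
function keyCanInvoke(tool: string, keyScopes: string[]): boolean {
  const needed = requiredScope[tool];
  return needed !== undefined && keyScopes.includes(needed);
}
```

For example, a key created with only statement:read can search statements but cannot compile assembly bundles.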
2. Configure Your MCP Client
For Claude Code, add this to your .mcp.json:
{
"mcpServers": {
"dictiva-governance": {
"command": "npx",
"args": ["tsx", "webapp/scripts/mcp-governance-server.ts"],
"env": {
"DICTIVA_API_KEY": "dv_live_...",
"DICTIVA_BASE_URL": "https://app.dictiva.com"
}
}
}
}
3. Verify
Ask your agent: "Search for governance statements about access control." If the MCP server is configured correctly, the agent will invoke search_statements and return results from your governance library.
The setup works with any MCP-compatible client -- Claude Code, Cursor, Windsurf, or any tool that implements the MCP client specification.
Use Cases
Compliance Checking Before Deployment
An agent preparing a deployment that involves storing personal data in a new region can invoke compliance-check with a description of the planned action. The prompt retrieves applicable data residency, data privacy, and cross-border transfer statements, evaluates the action against each one, and returns a structured assessment. The agent knows before deployment whether the action complies -- or what approvals it needs first.
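Gating a deployment on that assessment can be a few lines of pipeline code. A sketch, assuming the structured result lists a status per statement (the exact result schema isn't documented in this post, so the field names here are illustrative):

```typescript
// Sketch of gating a deployment on a compliance-check result.
// The Finding shape (statementCode, status) is an illustrative
// assumption about the structured pass/fail assessment.
type Finding = {
  statementCode: string;
  status: "pass" | "fail" | "needs_approval";
};

function deploymentBlocked(findings: Finding[]): { blocked: boolean; reasons: string[] } {
  const reasons = findings
    .filter((f) => f.status !== "pass")
    .map((f) => `${f.statementCode}: ${f.status}`);
  return { blocked: reasons.length > 0, reasons };
}
```

Anything short of a clean pass blocks the deployment and surfaces the specific statement codes that need attention.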
Policy-Aware Code Review
During code review, an agent can call list_applicable_policies_for_actor with the role "software engineer" and search_statements filtered to the relevant domain. If a pull request introduces a new data pipeline, the agent checks whether data classification, retention, and lineage requirements are satisfied. The review comment cites specific governance statement codes, not vague policy references.
Ontology-Grounded Agent Context
Governance programs define terms precisely. "Personal data," "data controller," "critical system" -- these have specific organizational meanings that differ from casual usage. The export_ontology tool lets agents load your glossary as structured context. When an agent encounters "sensitive data" in a conversation, it resolves the term against your glossary definition rather than guessing.
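Grounding can be as simple as resolving free text against the exported glossary before the agent reasons further. A minimal sketch, assuming each exported term carries a machine key, canonical label, and aliases (the field names are illustrative, based on the export_ontology description above):

```typescript
// Sketch: resolve a surface term against an exported ontology.
// The field names (key, label, aliases, definition) are assumptions
// about the export format, used here for illustration.
type OntologyTerm = {
  key: string;        // machine key
  label: string;      // canonical label
  aliases: string[];  // alternate spellings
  definition: string;
};

function resolveTerm(text: string, terms: OntologyTerm[]): OntologyTerm | undefined {
  const needle = text.trim().toLowerCase();
  return terms.find(
    (t) =>
      t.label.toLowerCase() === needle ||
      t.aliases.some((a) => a.toLowerCase() === needle)
  );
}
```

A mention of "sensitive data" then resolves to your organization's definition instead of the model's generic one.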
Machine-Readable Policy Bundles
The get_assembly_bundle tool compiles a published assembly into a self-contained JSON bundle with a signed manifest. This is useful for CI/CD gates, automated audits, and offline policy evaluation. The bundle contains every statement in the assembly with its full metadata -- enforcement mode, maturity level, domain, actor applicability -- in a format machines can parse without ambiguity.
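A CI gate over such a bundle can be very small. A sketch, assuming the bundle exposes a statements array with the metadata fields listed above (the exact bundle schema isn't spelled out in this post):

```typescript
// Sketch of a CI/CD gate over a compiled assembly bundle. The field
// names (statements, code, enforcementMode, domain) are illustrative
// assumptions about the bundle schema.
type BundleStatement = { code: string; enforcementMode: string; domain: string };
type AssemblyBundle = { statements: BundleStatement[] };

// Return the codes of mandatory statements the pipeline must verify.
function mandatoryStatements(bundle: AssemblyBundle): string[] {
  return bundle.statements
    .filter((s) => s.enforcementMode === "mandatory")
    .map((s) => s.code);
}
```

The pipeline fetches the bundle once at build time, extracts the mandatory statements, and fails the build if any of them is unsatisfied.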
Security Model
Governance data is sensitive. The MCP server enforces multiple layers of access control:
Plan gating. MCP agent access requires a Business or Enterprise plan. Community and Professional plans cannot use the agent API endpoints. This is enforced server-side -- there is no client-side bypass.
RBAC scopes. Each API key has explicit permission scopes. A key with statement:read cannot access glossary endpoints. A key without assembly:read cannot compile bundles. Scopes are checked on every request.
Rate limiting. Each API key has a per-minute request limit: 100 for Business, 500 for Enterprise. The server returns standard X-RateLimit-* headers and 429 Too Many Requests when the limit is exceeded. MCP and REST API requests share the same rate limit budget.
Separate metering. MCP requests are metered independently from browser API requests. The usage dashboard at Settings > Billing > API & MCP Usage shows MCP volume separately, so you can track agent consumption.
Audit trail. Every tool call is recorded as a governance event with actorType: "mcp_tool" and a source identifier (e.g., mcp/search_statements). These events appear in the tenant activity log alongside human actions. The audit trail is fire-and-forget -- it never blocks the tool call response.
Tenant isolation. The API key is scoped to a single tenant. There is no mechanism -- in the MCP server or the underlying API -- to access another tenant's data.
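An agent-side client should respect the 429 responses and X-RateLimit-* headers described above rather than retrying blindly. A minimal backoff sketch, assuming X-RateLimit-Reset carries a Unix timestamp in seconds -- a common convention, but not one this post confirms:

```typescript
// Sketch: derive a retry delay from rate-limit headers on a 429 response.
// Assumes X-RateLimit-Reset is a Unix timestamp in seconds (a common
// convention; the unit is an assumption, not documented here).
function retryDelayMs(
  headers: Record<string, string>,
  nowMs: number = Date.now()
): number {
  const reset = Number(headers["x-ratelimit-reset"]);
  if (!Number.isFinite(reset)) return 1000; // fall back to a short fixed wait
  return Math.max(0, reset * 1000 - nowMs);
}
```

Since MCP and REST requests share one budget, an agent that bursts tool calls can starve your other API consumers; backing off until the reset keeps both within the per-key limit.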
What Comes Next
This launch covers the stdio transport, which requires the MCP server to run as a local process. We are working on:
- Remote HTTP transport. Run the MCP server as a hosted service that agents connect to over HTTPS. This eliminates the local process requirement and lets server-side MCP clients connect without npx.
- Additional tools. Regulations search, control framework queries, attestation status checks, and exception lookup. Every governance entity that has an API will get an MCP tool.
- Governance event streaming. Subscribe to real-time governance events (statement changes, policy publications, compliance status updates) via MCP resource subscriptions. Agents can react to governance changes as they happen instead of polling.
- Multi-tenant agent orchestration. For Enterprise customers managing multiple subsidiaries, a parent-tenant MCP configuration that routes tool calls to the correct child tenant based on context.
Try It Today
The MCP governance server is available now for all Business and Enterprise customers. No additional setup is required beyond creating an API key and adding the MCP configuration to your client.
Read the full setup guide for detailed configuration, tool reference, and troubleshooting.
If you are not yet on a plan that includes MCP access, explore our pricing or start a free account to see the governance library that powers these tools.