AI Agent Governance

Governance your agents can follow, at the statement level

Most platforms bolt observability onto agent outputs. Dictiva makes policy itself machine-readable, agent-enforceable, and audit-ready — from the schema up.

Free forever on Community · MCP agent access on Business+

Your agents are making decisions. Can you prove they're following policy?

Most teams ship AI agents with guardrails scattered across prompts, allow-lists, regex filters, and post-hoc rule checks. When auditors ask “which policy did this agent follow when it took that action?” the answer is rarely satisfying.

Policy lives in PDFs, SharePoint folders, and Notion pages. Guardrails live in code. Approval flows live in Jira or nowhere. Audit trails live in a dozen LLM provider dashboards. There is no single source of policy truth — and certainly not one your agents can read.

Policy-as-text isn't enough. You need policy-as-data.

Six capabilities that make policy executable

Each one is shipped today, backed by a specific product surface. What isn't shipped yet has its own section below — no hidden gaps.

Statement-first machine-readable policy

Every governance statement carries modality, enforcement mode, and actor applicability — human, agent, or both. Policies become executable by construction, not interpretive guidance.

Shipped: enforcementMode + actorApplicability columns on statements table

Agent guidance on every policy

Structured agent metadata per statement: allowedActions, prohibitedActions, requiredContext, requiredApprovals, evidenceRequirements, escalationRules, failureMode — all queryable.

Shipped: agent_guidance JSONB column across the statement schema
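As a sketch, the agent_guidance payload for a hypothetical expense-approval statement might look like the following. The field names come from the schema above; every value is illustrative, not a real policy:

```json
{
  "allowedActions": ["create_expense_report", "attach_receipt"],
  "prohibitedActions": ["approve_own_expense"],
  "requiredContext": ["actor_role", "expense_amount"],
  "requiredApprovals": [{ "threshold": 500, "approverRole": "finance_manager" }],
  "evidenceRequirements": ["receipt_image", "approval_record"],
  "escalationRules": [{ "when": "amount_over_5000", "escalateTo": "cfo" }],
  "failureMode": "block_and_escalate"
}
```

Because this lives in a JSONB column, an agent can query individual keys rather than re-parsing prose policy on every action.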

Live MCP governance server

External AI agents (Claude Code, Cursor, ChatGPT, Claude web) connect to your policy corpus via the Model Context Protocol. Six tools, dual transport, read/write auth split.

Shipped: /api/mcp (HTTP) + stdio transport, 6 tools live

Full agent audit trail

Every AI and agent interaction logged with prompts, responses, tool calls, token spend, latency, errors. Exportable on Scale+. Compliance-ready from day one.

Shipped: ai_audit_logs table, 25 columns, 4 indexes
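As an illustrative sketch only (the real table has 25 columns; these field names are hypothetical), a single logged interaction might look like:

```json
{
  "tenant_id": "t_acme",
  "actor_type": "agent",
  "tool_name": "search_statements",
  "prompt": "Which retention policies apply to EU customer data?",
  "response_summary": "3 statements returned",
  "prompt_tokens": 412,
  "completion_tokens": 96,
  "latency_ms": 840,
  "error": null,
  "created_at": "2026-01-15T09:32:11Z"
}
```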

AI Context Layer

Six grounding modules (statements, regulations, tenant, user, domain, glossary) anchor agent reasoning in your actual policy corpus — not a generic pretrained prior.

Shipped: 6 context providers under lib/ai/context/

Library + adoption model

57 regulations, 600+ pre-extracted requirements, 1,300+ typed glossary terms. Agents reference a curated library; divergence from adopted statements is tracked automatically.

Shipped: adoption + divergence tracking on library statements

Roadmap — not claimed as shipped

What we're honest about deferring

These capabilities have the underlying schema or primitives already in the product — the user-facing surface is in flight. We'd rather tell you when it ships than pretend it already has.

Agent SOP runtime execution

Q3 2026

Procedures already support steps, conditions, RACI, subprocesses, and evidence. The executor that runs them (human-driven or agent-driven) with step-level state machines is next.

Agent approval workflow UI

Q3 2026

Statements already declare requiredApprovals. The UI that lets agents request approvals and lets humans grant them in-flow is next.

Audit-ready evidence dashboard

Q4 2026

Every primitive (audit log, attestations, evidence links) already exists. The compliance-facing dashboard that aggregates them into an evidence pack is next.

Connect any MCP-capable agent

Six governance tools exposed over stdio and Streamable HTTP. Bearer auth with scoped API keys. Works with Claude Code, Cursor, ChatGPT, Claude web, and any custom client that speaks MCP.

Claude Code (stdio)

Local dev loop — AI can query your tenant's policies directly.

{
  "mcpServers": {
    "dictiva": {
      "command": "npx",
      "args": ["@dictiva/mcp-server"],
      "env": { "DICTIVA_API_KEY": "dv_live_..." }
    }
  }
}

Cursor / Remote Claude Code (HTTP)

Remote MCP — same tools, accessed over Streamable HTTP.

{
  "mcpServers": {
    "dictiva": {
      "url": "https://app.dictiva.com/api/mcp",
      "headers": { "Authorization": "Bearer dv_live_..." }
    }
  }
}

ChatGPT / Claude web (Remote MCP)

Paste the URL and bearer token into the MCP connector settings.

URL:  https://app.dictiva.com/api/mcp
Auth: Bearer dv_live_...
Scopes: statement:read, glossary:read, assembly:read

AI observability vs. AI agent governance

Observability watches what agents did after the fact. Governance makes what they're allowed to do an addressable, queryable, versioned resource.

| Dimension | AI Observability Tools | Dictiva |
| --- | --- | --- |
| Primary unit of policy | Model output / trace | Statement (modality, actor, enforcement) |
| Where policy lives | Your prompt | Versioned policy corpus (library + tenant) |
| Agent awareness | After-the-fact logs | Before-action lookup via MCP |
| Enforcement | Post-hoc rule checks on output | Pre-action policy query + approval flow |
| Audit trail | Model traces and usage | Prompts + tools + outcomes + policy version |
| Actor applicability | Undifferentiated | Human / agent / both, per statement |
| Regulatory mapping | Manual | Per statement, to 57 regulations |

Built for the teams deploying agents

If any of these pains sound familiar, this page is for you.

AI Product Leaders

Pain: Shipping agents with guardrails scattered across prompts, filters, and post-hoc checks

With Dictiva: A single policy corpus every agent queries, with machine-readable enforcement and a full audit trail

CISOs & Security Leaders

Pain: No way to prove what policies autonomous systems actually followed at decision time

With Dictiva: Per-action audit trail tied to specific statement versions — defensible in audit and investigation

Compliance & Legal

Pain: EU AI Act, Colorado AI Act, OWASP Top 10 for Agentic Applications — obligations without tooling

With Dictiva: Policies mapped to regulatory obligations; agents query what applies to any given actor context

Platform Engineers

Pain: Every agent team builds its own half-baked policy lookup

With Dictiva: Drop-in MCP server with 6 governance tools, auth, rate limiting, and entitlement gating included

Frequently asked questions

What is AI agent governance?

AI agent governance is the discipline of applying policies, procedures, and audit trails to AI systems that take actions on your behalf — not just to the humans who instruct them. It covers what an agent is allowed to do, what it must escalate, what evidence it must collect, and how every action is logged. OWASP published a Top 10 for Agentic Applications in December 2025; EU AI Act high-risk obligations take effect August 2026; Colorado's AI Act is effective June 2026. The category is forming now.

How is this different from AI observability tools (Fiddler, Arthur, Credo AI)?

Observability tools watch model outputs and traces after the fact. Dictiva makes policy the primary unit: every governance statement carries enforcement mode and actor applicability, and agents query that policy corpus before taking action via the MCP server. You still get a full audit trail, but it's tied to specific policy statement versions — not just model telemetry.

What is the MCP server and which clients can connect?

The Model Context Protocol (MCP) server exposes 6 governance tools over two transports: stdio (for local Claude Code / Cursor) and Streamable HTTP (for ChatGPT, Claude web, and remote Cursor). Tools include search_statements, get_statement_guidance, list_applicable_policies_for_actor, search_glossary, get_assembly_bundle, and export_ontology. MCP access is gated to Business and Enterprise plans.
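As an illustrative sketch, a connected client invokes one of these tools with a standard MCP tools/call request. The tool name comes from the list above; the arguments here are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_statements",
    "arguments": { "query": "data retention", "actor": "agent" }
  }
}
```

The response returns matching statements as structured content, which the agent can consult before it acts.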

Is Dictiva a replacement for a traditional GRC platform?

Not today, and not the goal. Dictiva is the policy and procedure layer of governance — optimized for humans and AI agents to read, follow, and be audited against the same corpus. Risk registers, control testing, and audit evidence dashboards are on the 2026 roadmap. For the pieces that are shipped, Dictiva is more useful than a full GRC suite; for the pieces that aren't, we say so on this page.

What's machine-readable about a Dictiva statement?

Every statement has structured fields an agent can reason about: modality (must, should, may), enforcement mode (informational, human_review_required, machine_readable, machine_enforceable), actor applicability (human, agent, both), plus an agent_guidance JSONB with allowedActions, prohibitedActions, requiredApprovals, evidenceRequirements, escalationRules, and failureMode. Agents don't have to parse a 30-page PDF — they query the structured policy.
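Put together, the agent-facing view of a statement might look like this sketch. The enumerated values (modality, enforcement mode, actor applicability) come from the answer above; the statement text and guidance values are invented for illustration:

```json
{
  "statement": "Customer PII must not leave the EU region.",
  "modality": "must",
  "enforcementMode": "machine_enforceable",
  "actorApplicability": "both",
  "agent_guidance": {
    "prohibitedActions": ["export_pii_outside_eu"],
    "failureMode": "block_and_escalate"
  }
}
```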

How does pricing work for agent-heavy usage?

Today: seat-based subscription with AI credits bundled per tier. From Q3 2026: a hybrid consumption model metered in 'governance actions' — different action classes (policy lookup, policy check, deep regulation audit) debit different amounts. Customers see action counts, never raw token math, so verbose model output can't cause bill shock. Internal margin target: ≥5x blended token cost.

Enterprise-grade from day one

SOC 2 Type II architecture
Scoped API keys with per-scope rate limiting
Read/write auth split on MCP tools
Full audit log with export on Scale+
Tenant isolation for all agent-accessible data
AES-256 at rest, TLS 1.3 in transit
GDPR + CCPA compliant
99.9% uptime SLA on Enterprise

Governance your agents can actually follow.

Start on the Community tier. Upgrade to Business when your agents need MCP access. No sales call required.

No credit card required · MCP access on Business and above