OrmAI as an MCP server for Claude

Expose OrmAI's policy-enforced tools as a Model Context Protocol server. Drop it into Claude Desktop, Cursor, or any MCP client.

Dipankar Sarkar · Updated April 15, 2026 · Tags: mcp, claude, anthropic, cursor

OrmAI ships a Model Context Protocol (MCP) adapter. Once wired, any MCP-compatible client — Claude Desktop, Cursor, Zed, custom clients — can use your policy-enforced toolset.

Install

uv add 'ormai[sqlalchemy,mcp]'

Build the server

# server.py
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from ormai.quickstart import mount_sqlalchemy
from ormai.utils import PolicyBuilder, DEFAULT_PROD
from ormai.mcp import serve_stdio
from ormai.core.context import RunContext
from models import Customer, Order

engine = create_engine(os.environ["DATABASE_URL"])
Session = sessionmaker(engine)

policy = (
    PolicyBuilder(DEFAULT_PROD)
    .register_models([Customer, Order])
    .deny_fields("*password*", "*secret*", "*token*")
    .mask_fields(["customer.email"])
    .tenant_scope("tenant_id")
    .max_rows(100)
    .build()
)

toolset = mount_sqlalchemy(engine=engine, session_factory=Session, policy=policy)

def principal_from_request(req):
    """How we resolve principal from MCP request headers."""
    return RunContext.create(
        tenant_id=int(req.headers.get("X-Tenant-Id", "0")),
        user_id=req.headers.get("X-User-Id", "mcp-client"),
        db=Session(),
        trace_id=req.headers.get("X-Trace-Id"),
    )

if __name__ == "__main__":
    serve_stdio(toolset, principal_resolver=principal_from_request)

Hook into Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the Windows / Linux equivalent:

{
  "mcpServers": {
    "ormai-orders": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/your/project", "python", "server.py"],
      "env": {
        "DATABASE_URL": "postgresql://...",
        "X_TENANT_ID": "42",
        "X_USER_ID": "[email protected]"
      }
    }
  }
}

Restart Claude Desktop. It will discover the OrmAI tools and present them in the tool drawer.

Now ask Claude: “What were my pending orders this week?” and it will call db.query with the right shape, scoped to tenant 42, returning data with email masked.
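Under the hood, the tool call Claude emits might look something like this (the exact argument shape depends on OrmAI's `db.query` schema; the field names below are illustrative, not confirmed API):

```json
{
  "name": "db.query",
  "arguments": {
    "model": "Order",
    "where": {
      "status": "pending",
      "created_at": {"gte": "2026-04-08"}
    },
    "limit": 20
  }
}
```

Note what is absent: there is no tenant filter in the arguments. The policy injects the tenant scope from the resolved principal, so the model cannot widen its own access by changing the call.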

Hook into Cursor

Cursor’s MCP config is similar. In ~/.cursor/mcp.json:

{
  "mcpServers": {
    "ormai": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/your/project", "python", "server.py"]
    }
  }
}

Open a Cursor chat, hit the tools menu, enable ormai. Now your IDE assistant can answer questions about your data — through the same policy your application uses.

HTTP MCP (multi-tenant)

For shared deployments where multiple users connect from different MCP clients, run the HTTP variant:

from ormai.mcp import serve_http

if __name__ == "__main__":
    serve_http(toolset, port=8765, principal_resolver=principal_from_request, auth=verify_jwt)

Here `verify_jwt` extracts a tenant and user from a bearer token. Each request resolves to its own principal, runs through the same policy, and produces an audit row tagged with that principal.
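A minimal `verify_jwt` could look like the following stdlib-only HS256 sketch. The shared secret, the claim names (`tenant_id`, `sub`, `exp`), and the returned dict shape are all assumptions here; in production, prefer a maintained library such as PyJWT and verify the `alg` header explicitly.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # assumption: a shared HS256 secret; load yours from config

def _b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url without padding; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_jwt(token: str) -> dict:
    """Verify an HS256 JWT and return the principal it encodes.

    Claim names (tenant_id, sub, exp) are illustrative assumptions.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise PermissionError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise PermissionError("token expired")
    return {"tenant_id": claims["tenant_id"], "user_id": claims["sub"]}
```

The important property is that the tenant comes from the verified token, never from anything the MCP client can freely set.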

What the MCP client sees

Tool descriptions include the model registry, field visibility, and write capabilities — automatically derived from policy. So a Claude Desktop user inspecting the tool list sees:

  • db.query — Query records. Models: Customer (read-only), Order (read-only).
  • db.get — Fetch a record by ID. Models: Customer, Order.
  • db.aggregate — Aggregate records. Models: Customer, Order.

Write tools are absent because the example policy didn’t enable writes.

Read-only mode for engineering teams

A common pattern: your engineering team wants Claude Desktop to be able to query your staging database for debugging. You build a separate read-only policy and serve it on a separate MCP server:

debug_policy = (
    PolicyBuilder(DEFAULT_PROD)
    .register_models([Customer, Order, Subscription])
    .deny_fields("*secret*", "*password*", "*token*")
    .mask_fields(["*.email", "*.phone"])
    .max_rows(50)
    .require_principal_role("engineer")
    .build()
)

debug_toolset = mount_sqlalchemy(engine=staging_engine, session_factory=Session, policy=debug_policy)
serve_stdio(debug_toolset, principal_resolver=...)

Engineers add this to their personal Claude Desktop config. They get scoped, audited, masked staging access. The audit log captures every query — useful for post-incident investigations.

A note on trust

The MCP server is a process running with database credentials. Treat it like any service:

  • Run it with the minimum DB role needed.
  • Don’t run it on engineer laptops with production credentials. Ever.
  • Use TLS for the HTTP variant.
  • Consider per-user JWTs over a shared API key.

Common mistakes

  • Same policy for end-users and engineering team. They have different access patterns; they need different policies.
  • Forgetting to set tenant in env vars for stdio mode. The server has no idea who you are unless you tell it.
  • Running with production DB creds on a laptop. Hard “no” — production should never be touched directly by an MCP server you didn’t deploy.
  • Logging the MCP transcript. OrmAI logs the audit row; don’t also log the entire MCP exchange unless you scrub aggressively.

Found a typo or want to suggest a topic? Email [email protected].