Guide
OrmAI with the Vercel AI SDK
Expose OrmAI tools to the Vercel AI SDK so your Next.js / Hono / SvelteKit agent has type-safe, policy-enforced database access.
The Vercel AI SDK is one of the most widely used ways to build agents in TypeScript. OrmAI's TS package (@ormai/core) plugs into it directly: every tool call you stream through the SDK runs through OrmAI's policy compiler.
Install
pnpm add @ormai/core @ormai/prisma ai @ai-sdk/anthropic
(Or @ormai/drizzle / @ormai/typeorm instead of @ormai/prisma for those ORMs.)
Build the toolset
// lib/ormai.ts
import { mountPrisma, PolicyBuilder, DEFAULT_PROD } from "@ormai/core";
import { prisma } from "./prisma";
export const policy = new PolicyBuilder(DEFAULT_PROD)
.registerModels(["Customer", "Order"])
.denyFields(["*password*", "*secret*", "*token*"])
.maskFields(["customer.email", "customer.phone"])
.tenantScope("tenantId")
.enableWrites(["Order"], { requireReason: true })
.maxRows(200)
.statementTimeoutMs(3000)
.build();
export const toolset = mountPrisma({ prisma, policy });
Convert to AI SDK tools
@ormai/core ships an adapter:
// app/api/chat/route.ts
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { toAiSdkTools, RunContext } from "@ormai/core";
import { toolset } from "@/lib/ormai";
import { auth } from "@/lib/auth";
export async function POST(req: Request) {
const { messages } = await req.json();
const session = await auth();
const ctx = RunContext.create({
tenantId: session.tenantId,
userId: session.userId,
traceId: req.headers.get("x-trace-id") ?? crypto.randomUUID(),
});
const tools = toAiSdkTools(toolset, ctx);
const result = await streamText({
model: anthropic("claude-opus-4-7"),
messages,
tools,
maxSteps: 10,
});
return result.toDataStreamResponse();
}
toAiSdkTools returns a record like { "db.query": tool({...}), "db.get": tool({...}), ... }, typed against the Vercel AI SDK's tool() helper. Each entry has an inputSchema (a JSON schema) derived from the policy and an execute function that runs the tool against ctx.
What the model sees
The model sees a small, typed tool list:
db.query(input: {model, where, include, order, limit, cursor})
db.get(input: {model, id})
db.aggregate(input: {model, group_by, aggregations, where})
db.create(input: {model, data, reason})
db.update(input: {model, id, data, reason})
db.delete(input: {model, id, reason})
It does not see tenantId as a parameter — the SDK adapter strips it from the JSON schema since it comes from ctx.
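The stripping step is mechanical: before a tool's schema is handed to the model, server-provided fields are removed from its properties and required lists. A minimal sketch of that transformation (the name stripServerFields is illustrative, not OrmAI API):

```typescript
// Illustrative sketch: remove server-provided fields (e.g. tenantId) from a
// JSON schema before exposing it to the model. Not OrmAI's actual code.
interface JsonSchema {
  type: "object";
  properties: Record<string, unknown>;
  required?: string[];
}

function stripServerFields(schema: JsonSchema, fields: string[]): JsonSchema {
  const properties = Object.fromEntries(
    Object.entries(schema.properties).filter(([key]) => !fields.includes(key))
  );
  const required = schema.required?.filter((key) => !fields.includes(key));
  return { ...schema, properties, required };
}

const querySchema: JsonSchema = {
  type: "object",
  properties: {
    model: { type: "string" },
    where: { type: "object" },
    tenantId: { type: "string" },
  },
  required: ["model", "tenantId"],
};

// The model only ever sees the stripped schema; tenantId stays server-side.
const exposed = stripServerFields(querySchema, ["tenantId"]);
```

At runtime the adapter re-injects the stripped values from ctx before executing the tool, so the model can neither read nor override them.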
Streaming UI integration
Vercel’s useChat hook works as-is. To surface tool calls in the UI:
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts?.map((p, i) =>
p.type === 'tool-invocation' ? (
<details key={i}>
<summary>{p.toolInvocation.toolName}</summary>
<pre>{JSON.stringify(p.toolInvocation.args, null, 2)}</pre>
</details>
) : (
<span key={i}>{p.text}</span>
)
)}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
</form>
</div>
);
}
Edge runtime
OrmAI works in the Edge runtime as long as your ORM does. Prisma supports Edge via Prisma Accelerate (the successor to the Data Proxy) or its driver adapters. Drizzle's node-postgres driver doesn't work on Edge, but @neondatabase/serverless and @vercel/postgres do.
The OrmAI policy engine itself has zero Node-only dependencies — it runs in any V8 environment.
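For instance, an Edge-pinned toolset backed by Neon's serverless driver might look like the sketch below. This assumes @ormai/drizzle exports a mountDrizzle that mirrors the mountPrisma shown earlier; verify the actual export before use.

```typescript
// lib/ormai-edge.ts -- Edge-compatible variant (sketch, assumptions noted above).
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";
import { mountDrizzle } from "@ormai/drizzle"; // assumed export, mirroring mountPrisma
import { policy } from "@/lib/ormai";

// The HTTP-based Neon driver works in any V8 environment, including Edge.
const db = drizzle(neon(process.env.DATABASE_URL!));

export const toolset = mountDrizzle({ db, policy });
```

Your route file then only needs `export const runtime = "edge";` alongside the same streamText handler as before.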
Audit sink in serverless
For serverless deployments, write audit rows to your existing observability layer rather than a transactional sink:
import { LoggerAuditSink } from "@ormai/core";
import { logger } from "@/lib/logger"; // your structured logger
import { LoggerAuditSink } from "@ormai/core";
import { logger } from "@/lib/logger"; // your structured logger
export const policy = new PolicyBuilder(DEFAULT_PROD)
  // ...same builder calls as above...
  .auditSink(new LoggerAuditSink(logger))
  .build();
Audit rows arrive in your log stream, queryable via your existing tooling. For long-term retention, ship them to S3/GCS via your log pipeline.
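If your logger doesn't match what LoggerAuditSink expects, a thin custom sink is easy to write. The sketch below assumes a sink interface with a single write method and an illustrative row shape; check @ormai/core's exported AuditSink type before relying on either.

```typescript
// Hypothetical audit-row and sink shapes -- verify against @ormai/core's
// actual AuditSink interface before use.
interface AuditRow {
  traceId: string;
  tool: string;
  tenantId: string;
  durationMs: number;
}

interface AuditSink {
  write(row: AuditRow): void;
}

// Emits one JSON line per audit row, suitable for any stdout-based log pipeline.
class JsonLineAuditSink implements AuditSink {
  constructor(private emit: (line: string) => void = console.log) {}
  write(row: AuditRow): void {
    this.emit(JSON.stringify({ kind: "ormai.audit", ...row }));
  }
}

// Capture output in-memory here; in production, default to console.log.
const lines: string[] = [];
const sink = new JsonLineAuditSink((line) => lines.push(line));
sink.write({ traceId: "t-1", tool: "db.query", tenantId: "acme", durationMs: 12 });
```

Because the sink is synchronous and line-oriented, it adds no transactional dependency to the request path, which is what you want in a serverless function.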
Type safety end-to-end
The RunContext type is parameterized by your policy’s model registry. If you try to query a model that isn’t registered, TypeScript catches it at compile time:
toolset.execute("db.query", { model: "FooBar" }); // TS error: model not in registry
The same applies to fields in where clauses against typed adapters (@ormai/prisma, @ormai/drizzle). The agent’s input is validated by the JSON schema; your code that calls toolset.execute directly is checked by the type system.
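The mechanism behind this is an ordinary string-literal union: the registered model names become a type parameter, so an unregistered name fails to compile. A reduced illustration (not OrmAI's actual types):

```typescript
// Reduced illustration of registry-based type safety (not OrmAI's real types).
type RegisteredModel = "Customer" | "Order";

interface QueryInput<M extends RegisteredModel> {
  model: M;
  where?: Record<string, unknown>;
}

// The model name is constrained at the call site, not checked at runtime.
function executeQuery<M extends RegisteredModel>(input: QueryInput<M>): M {
  return input.model;
}

executeQuery({ model: "Order" });        // compiles
// executeQuery({ model: "FooBar" });    // TS error: not assignable to RegisteredModel
```

The JSON schema handles untrusted agent input at runtime; the union type handles your own code at compile time. Both are generated from the same registerModels call, so they can't drift apart.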
Common mistakes
- Mounting OrmAI inside the request handler instead of at module scope. The toolset is expensive to build; create it once.
- Passing the user's tenantId from the client. It must come from the server-side authenticated session; treat the client as untrusted.
- Forgetting to set maxSteps on streamText. Without it, a misbehaving agent can loop tool calls. We recommend 8–12.
- Using a single Prisma client across requests on serverless. Standard Prisma serverless rules apply (Accelerate, connection pooling, etc.); OrmAI doesn't change them.