Guide
OrmAI with LangGraph
Wire OrmAI tools into a LangGraph state machine so each node has scoped, audited database access.
LangGraph gives you state machines for agents. OrmAI gives you safe data access. This guide wires them together — including the per-node policy switching that’s hard to do with bare LangChain tools.
The pattern
A LangGraph node is a function from state to state. We wrap each node so that any OrmAI tool calls within it run with a RunContext that carries the current tenant, principal, and trace ID.
from langgraph.graph import StateGraph, END
from ormai.core.context import RunContext
from ormai.langchain import as_lc_tools # converts OrmAI tools to LangChain Tool objects
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-opus-4-20250514")
def with_ctx(toolset):
    """Decorator factory: scope a node's OrmAI calls to a per-run context."""
    def decorator(node_fn):
        async def wrapped(state):
            ctx = RunContext.create(
                tenant_id=state["tenant_id"],
                user_id=state["user_id"],
                db=state["session"],
                trace_id=state["trace_id"],
            )
            with toolset.use(ctx):
                return await node_fn(state)
        return wrapped
    return decorator
toolset.use(ctx) is a context manager that scopes the tools to a RunContext. Anything called inside the with block runs against that context. Making with_ctx take the toolset as an argument lets each node bind a different toolset, which the next section relies on.
Per-node policy switching
LangGraph’s killer feature is that different nodes can have different policies. A “research” node should be read-only. A “fulfillment” node can write. Encode that:
read_only_policy = (
    PolicyBuilder(DEFAULT_PROD)
    .register_models([...])
    .tenant_scope("tenant_id")
    .build()
)
write_policy = (
    PolicyBuilder(DEFAULT_PROD)
    .register_models([...])
    .tenant_scope("tenant_id")
    .enable_writes(models=["Order"], require_reason=True)
    .build()
)
read_toolset = mount_sqlalchemy(engine=engine, session_factory=Session, policy=read_only_policy)
write_toolset = mount_sqlalchemy(engine=engine, session_factory=Session, policy=write_policy)
@with_ctx(read_toolset)
async def research_node(state):
    response = await llm.bind_tools(as_lc_tools(read_toolset)).ainvoke(state["messages"])
    return {"messages": state["messages"] + [response]}

@with_ctx(write_toolset)
async def fulfillment_node(state):
    response = await llm.bind_tools(as_lc_tools(write_toolset)).ainvoke(state["messages"])
    return {"messages": state["messages"] + [response]}
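The use() call does the heavy lifting. As a rough sketch (this assumes nothing about OrmAI's actual implementation, and all names here are hypothetical), a scoping context manager of this shape is commonly built on contextvars, which keeps the active context correct even across concurrent async tasks:

```python
import contextlib
import contextvars

# Hypothetical sketch of a toolset that scopes calls to a per-run context.
_current_ctx = contextvars.ContextVar("ormai_run_context", default=None)

class ScopedToolset:
    @contextlib.contextmanager
    def use(self, ctx):
        """Make ctx the active context for everything inside the with block."""
        token = _current_ctx.set(ctx)
        try:
            yield self
        finally:
            _current_ctx.reset(token)  # restore the previous context, even on error

    def current_ctx(self):
        """What a tool would call internally to find its RunContext."""
        ctx = _current_ctx.get()
        if ctx is None:
            raise RuntimeError("missing run context: call inside toolset.use(...)")
        return ctx
```

The try/finally around reset is what makes nested use() blocks and exceptions safe: the outer context is always restored.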
The graph routes between them based on intent classification or your own logic. Each node sees only the tools its policy allows.
A complete example
A two-node graph: classify intent, then either look up information (read) or place an order (write).
from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END

class State(TypedDict):
    messages: list
    tenant_id: int
    user_id: str
    session: object
    trace_id: str
    intent: Literal["lookup", "order"] | None

async def classify_intent(state):
    last = state["messages"][-1]["content"]
    if any(k in last.lower() for k in ["buy", "place", "order"]):
        return {"intent": "order"}
    return {"intent": "lookup"}
graph = StateGraph(State)
graph.add_node("classify", classify_intent)
graph.add_node("lookup", research_node)
graph.add_node("order", fulfillment_node)
graph.set_entry_point("classify")
graph.add_conditional_edges("classify", lambda s: s["intent"], {"lookup": "lookup", "order": "order"})
graph.add_edge("lookup", END)
graph.add_edge("order", END)
app_graph = graph.compile()
Now invoke:
from sqlalchemy.orm import sessionmaker

S = sessionmaker(engine)
with S() as session:
    result = await app_graph.ainvoke({
        "messages": [{"role": "user", "content": "What's my last order's status?"}],
        "tenant_id": 42,
        "user_id": "user-7",
        "session": session,
        "trace_id": "trace-abc",
        "intent": None,
    })
The lookup node runs read-only. The order node, when reached, can write. Both write to the same audit log, both scope to tenant 42, both end up in the same trace.
Streaming and partial results
OrmAI tools are async-friendly and stream-compatible. With LangGraph’s astream_events:
async for event in app_graph.astream_events({...}, version="v2"):
    if event["event"] == "on_tool_end":
        print("tool:", event["name"], "audit_id:", event["data"]["output"].get("audit_id"))
You can correlate user-facing streaming events with audit IDs in real time. Useful for showing “verifying order…” UI states pinned to actual operations.
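As a small illustration, a helper of this shape (the function and the audit_id key in tool output are assumptions, not OrmAI API; the event shape follows LangGraph's astream_events v2) can turn tool events into user-facing status lines:

```python
def status_line(event):
    """Map a LangGraph astream_events v2 event to a short UI status string.

    Returns None for events we don't surface. Treating tool output as a dict
    with an audit_id key is an assumption about OrmAI, per the example above.
    """
    kind = event.get("event")
    if kind == "on_tool_start":
        return f"running {event['name']}..."
    if kind == "on_tool_end":
        output = event.get("data", {}).get("output")
        audit_id = output.get("audit_id") if isinstance(output, dict) else None
        return f"finished {event['name']} (audit: {audit_id or 'n/a'})"
    return None  # model tokens, chain events, etc. are handled elsewhere
```

Each non-None string can be pushed straight to the client alongside the token stream, keyed by the audit ID.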
Common mistakes
- Forgetting with_ctx. Without it, OrmAI raises MissingRunContext. Worth treating as a startup-time check (assert every node is wrapped).
- Sharing one toolset across nodes that should have different policies. Build separate toolsets per policy.
- Sharing one DB session across nodes. Each node should run in its own transaction. Put a session factory in state rather than a live session (the complete example above passes a session only for brevity).
Found a typo or want to suggest a topic? Email [email protected].