OrmAI


Capability-based security for AI agents

What it would mean to design agent permissions the way capability-secure operating systems were designed. A blueprint.

Dipankar Sarkar · Updated April 15, 2026 · capabilities, security, agents, design

A previous article introduced capability tokens as a useful primitive for agent permissions. This article goes deeper: what would it look like to design an agent platform from the ground up around capability-based security?

The blueprint isn’t an academic exercise. We use most of these patterns in OrmAI, and the pieces we don’t use yet are the parts we’re considering.

The five properties capability-secure systems share

Drawing from the OS literature (KeyKOS, EROS, seL4, the JS object-capability tradition):

  1. Designation = authorization. Naming a resource is the same as having authority to act on it. There’s no separate “look the name up in a table to see whether you’re allowed” step. If you have the capability, you can act; if not, you can’t even reference the resource.
  2. No ambient authority. A piece of code’s permissions come exclusively from arguments passed in. Globals, environment, or “the running user’s permissions” don’t grant anything.
  3. Capabilities are unforgeable. They can be passed, but not invented. The system is the only entity that can mint them.
  4. Attenuation is cheap. Anyone holding a capability can derive a strictly weaker one and pass it on.
  5. Composition is local. Combining capabilities never accidentally widens authority.
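
Properties 3 and 4 can be sketched in ordinary code. This is a simplified object-capability illustration, not any real system’s API — and Python can’t make objects truly unforgeable, so treat the frozen dataclass as a stand-in for a system-minted token:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FileCap:
    """Hypothetical capability for a file: holding the object IS the authority."""
    path: str
    can_write: bool

    def read_only(self) -> "FileCap":
        # Attenuation is cheap: any holder can derive a strictly weaker capability.
        return replace(self, can_write=False)

full = FileCap(path="/data/report.txt", can_write=True)
weak = full.read_only()
assert weak.can_write is False
assert weak.path == full.path  # designation preserved, authority narrowed
```

Passing `weak` to a collaborator delegates read access without any way to recover write access — composition never widens authority.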

Map these to AI agents:

OS world → agent world:

  • Process holds capabilities → RunContext holds capabilities
  • Capability for “open file X” → capability for “query Order in tenant 42”
  • Capability passed via syscall → capability passed to tool call
  • Attenuated capability for child process → narrowed RunContext for sub-tool
  • No ambient chmod 777 → no ambient “agent admin mode”

What a capability-secure agent platform looks like

Here’s the design, piece by piece.

1. Tools are functions of (input, capability)

Every tool — generic database tool, custom domain tool, third-party integration — has the signature (input, capability) -> result. There is no thread-local state, no global config, no “current user” injected magically. If a tool needs authority to do something, the capability passed in must grant it.

This sounds restrictive. In practice it’s a small change: most existing tools already take a context object; capability-secure design makes it the only source of authority.
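
A minimal sketch of the shape — `Capability`, its fields, and the in-memory datastore are all illustrative, not OrmAI’s actual API:

```python
class Capability:
    """Illustrative capability: a tool's ONLY source of authority."""
    def __init__(self, tenant_id, models):
        self.tenant_id = tenant_id
        self.models = set(models)

    def require(self, model):
        # Designation = authorization: no table lookup, just the capability.
        if model not in self.models:
            raise PermissionError(f"capability does not grant access to {model!r}")

# Stand-in datastore for the sketch.
ORDERS = [{"tenant": 42, "id": 1}, {"tenant": 7, "id": 2}]

def list_orders(input, cap):
    cap.require("Order")  # no thread-locals, no globals, no "current user"
    return [o for o in ORDERS if o["tenant"] == cap.tenant_id]

cap = Capability(tenant_id=42, models={"Order"})
print(list_orders({}, cap))  # [{'tenant': 42, 'id': 1}]
```

Every scrap of authority the tool exercises is visible in its signature, which is what makes the rest of the design auditable.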

2. The application is the capability mint

When a request arrives at your agent endpoint, the application authenticates it, looks up the user’s tenant and roles, and constructs a capability. The capability is a closed-over snapshot of: who, what tenants, what models, what fields, what budgets, what trace.

Crucially: the model never sees the capability. It exists in your application’s memory between request and response. Even if a prompt injection convinces the model to “use admin permissions,” there’s no mechanism by which the model can construct or modify a capability.
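
A hypothetical mint, running in the application between request and response — the field names are illustrative, not OrmAI’s actual schema:

```python
import time

def mint_capability(session):
    # The application, never the model, constructs the capability from the
    # authenticated request: who, what tenants, what models, what budget.
    return {
        "principal": session["user_id"],
        "tenants": session["tenant_ids"],
        "models": session["allowed_models"],
        "budget_tokens": 50_000,          # spend ceiling for this run
        "trace_id": session["request_id"],
        "expires_at": time.time() + 300,  # short-lived by default
    }
```

The model only ever sees tool results; the dict above lives in application memory and is handed to tools directly, so prompt injection has nothing to forge.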

3. Sub-tools receive narrowed capabilities

If tool A calls tool B (which a non-trivial agent system will do), A passes B a narrower capability. Specifically: B should not be able to do anything A wasn’t authorized to do, and ideally should be limited to exactly what B needs.

async def parent_tool(input, cap):
    # Attenuate before delegating: derive a capability that addresses
    # only the customer this call is about.
    customer_cap = cap.narrow_to(model="Customer", id=input["customer_id"])
    # other_input: whatever payload sub_tool expects; it carries no authority.
    return await sub_tool(other_input, customer_cap)

The sub-tool receives a capability that addresses only one customer. If it tries to query orders, it’s denied — the capability doesn’t carry that authority.

4. The audit log records capabilities, not principals

Most logs say “user U did X.” A capability-aware log says “the holder of capability C did X, and C was minted from request R for principal U with constraints {…}.” This sounds verbose, but it’s the right shape for investigation: when something goes wrong, you want to know what authority was held and where that authority came from.

OrmAI’s audit row includes a capability_summary that captures the salient fields of the RunContext. We’re considering adding capability hashes to make rapid “did this capability ever do X?” queries possible.
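
A sketch of what such a row could look like — `capability_summary` is the field the article names; the hash and the other keys are assumptions for illustration:

```python
import hashlib
import json

def audit_row(action, cap):
    # Record the authority held, not just the principal.
    summary = {
        "principal": cap["principal"],
        "tenants": cap["tenants"],
        "models": sorted(cap["models"]),
    }
    return {
        "action": action,
        "capability_summary": summary,
        # A stable hash of the summary would make
        # "did this capability ever do X?" a single indexed query.
        "capability_hash": hashlib.sha256(
            json.dumps(summary, sort_keys=True).encode()
        ).hexdigest(),
    }
```

Two actions taken under the same capability share a hash, which is exactly what a rapid investigation query needs.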

5. Revocation via short-lived tokens

Pure capability systems can’t revoke. Practical systems work around it:

  • Tokens carry expiry. Long-lived sessions refresh; short calls don’t need to.
  • The application can mark a session ID as revoked; subsequent capability uses are denied.
  • Critical operations require fresh re-mint — not just the original token, but a new one issued post-policy-check.
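
The first two workarounds combine into one check at capability-use time. A sketch under assumed names — the revocation set would live in your application, not in the capability itself:

```python
import time

REVOKED_SESSIONS = set()  # application-side revocation list (assumed design)

def check_capability(cap, now=None):
    # Expiry: the capability carries its own deadline.
    now = time.time() if now is None else now
    if now >= cap["expires_at"]:
        raise PermissionError("capability expired")
    # Revocation: the application can kill a whole session out-of-band.
    if cap["session_id"] in REVOKED_SESSIONS:
        raise PermissionError("session revoked")

cap = {"session_id": "s1", "expires_at": time.time() + 60}
check_capability(cap)          # valid: not expired, not revoked
REVOKED_SESSIONS.add("s1")
# A subsequent check_capability(cap) now raises PermissionError
```

The capability stays pure and pass-by-value; revocation lives at the one choke point where capabilities are exercised.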

What this would let you do

A capability-secure agent platform unlocks patterns that are awkward today.

Subagent delegation

The main agent can spawn a sub-agent (research bot, summarizer, classifier) and pass it a strictly narrower capability. The sub-agent can do its job, can’t widen its authority, and the audit log shows exactly what it did under what derived capability.

User-attributable bot actions

In a multi-user shared workspace, each tool call is attributable to the user who triggered it, via the capability the application minted on their behalf. This is the correct shape for “show me everything Alice’s bot did today” or “revoke access for users who left the company.”

Cross-tenant or cross-service handoffs

When your agent needs to call another team’s service or another tenant’s resource, you pass a capability bearer token instead of credentials. The other side can verify the token, see exactly what was authorized, and act accordingly. No need to grant your agent a service-account that’s wider than it needs.
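
One plausible shape for such a token, sketched with a shared-secret HMAC (a real deployment might use Macaroons, Biscuits, or asymmetric signatures instead; the key and claim names here are assumptions):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-verification-key"  # assumption: both services hold this key

def mint_bearer(claims):
    # Encode the authorized scope, then sign it so the receiver can verify
    # exactly what was granted without trusting the caller.
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + sig

def verify_bearer(token):
    body, sig = token.rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid or tampered capability token")
    return json.loads(base64.urlsafe_b64decode(body))

token = mint_bearer({"tenant": 42, "models": ["Order"], "op": "read"})
assert verify_bearer(token) == {"tenant": 42, "models": ["Order"], "op": "read"}
```

The receiving service learns the full authorized scope from the token itself; there is no wide service account to over-grant.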

Time-bounded escalation

For “do this one thing as admin,” issue a capability with a 60-second expiry that grants exactly that operation. After 60 seconds, the capability is dead. Even if the agent pickles it and tries to use it later, it doesn’t work.
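
A sketch of what that escalation mint could look like — `mint_escalation` and its fields are hypothetical names, not a real API:

```python
import time

def mint_escalation(base_cap, operation, ttl=60):
    # Grant exactly one operation, dead after `ttl` seconds.
    return {**base_cap, "operations": {operation}, "expires_at": time.time() + ttl}

def invoke(cap, operation):
    if time.time() >= cap["expires_at"]:
        raise PermissionError("capability expired")
    if operation not in cap["operations"]:
        raise PermissionError(f"{operation!r} not granted by this capability")

admin = mint_escalation({"principal": "alice"}, "reindex_search", ttl=60)
invoke(admin, "reindex_search")   # allowed inside the window
# invoke(admin, "drop_database") raises: never granted
# After 60 seconds, even "reindex_search" raises: the capability is dead
```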

What we get wrong today

Most agent platforms (including, today, OrmAI) implement some of this. Honest gaps:

  • Tool authoring discipline. Nothing structurally prevents a tool author from reading thread-local state instead of the passed-in capability. Convention, not enforcement.
  • Capability attenuation is manual. Sub-tools should automatically receive narrower capabilities. Today, the calling tool has to do it explicitly.
  • Cross-process capability passing. OrmAI’s RunContext is in-process. Cross-service capability passing requires a bearer-token format we haven’t standardized.
  • Capability-aware logging. We log enough to reconstruct what was authorized. We don’t yet have a “show me all uses of capabilities matching X” query primitive.

These are roadmap items. The foundation is the right shape; the polish takes time.

Why this matters more for agents than for humans

Humans have judgment. A human developer with broad permissions usually doesn’t accidentally drop the production database. Capability discipline matters less when there’s a brain in the loop.

LLMs don’t have judgment in the same sense. They have capabilities you handed them and a tendency to use them in ways you didn’t anticipate. The compositional explosion is faster: a human runs maybe 100 commands a day; an agent runs that in a minute. Tighter authority bounds matter more, not less.

This is why capability-based security, which lost the OS war on ergonomic grounds, is winning the agent war. The ergonomic friction (annoying for humans) is a feature when the actor is non-deterministic.

Further reading

  • Mark Miller, “Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control.”
  • The erights.org wiki — particularly the “rights amplification” and “no designation without authority” articles.
  • For the modern application of these ideas: Macaroons, Biscuits, SPIFFE.

Found a typo or want to suggest a topic? Email [email protected].