Now with agents in the loop

The authorization gap, with agents in the loop.

AI agents act on behalf of users, and sometimes on behalf of no one at all. Same authorization questions. Harder to answer. Faster to fail.

You spent a decade solving access for humans. Now multiply by N agents.

Every AI assistant, every copilot, every autonomous agent makes access decisions. What data to retrieve. What tools to call. What action to take.

Most are doing it with shared credentials, hardcoded API keys, or a single service account tied to the model. That worked for a proof of concept. It does not work in production.

The Authorization Gap is not going away. It is getting worse.

Four questions you should be able to answer.

When an agent acts in your environment, these are the questions that matter. Most teams cannot answer any of them yet.

Who did what?

When an agent takes an action, can you trace it back to a user, an intent, a policy? Or does the audit trail just say the AI did it?
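One way to make "who did what" answerable is to log authorization context with every agent action, not just the action itself. A minimal sketch, assuming a hypothetical event schema (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass, asdict

# Hypothetical audit event that preserves delegation and policy context,
# so the trail never reads as just "the AI did it".
@dataclass
class AgentAuditEvent:
    agent_id: str      # the non-human identity that acted
    on_behalf_of: str  # the user the action was delegated from
    intent: str        # the request or task that triggered the action
    action: str        # what the agent actually did
    policy_id: str     # the policy that permitted (or denied) it
    decision: str      # "permit" or "deny"

event = AgentAuditEvent(
    agent_id="agent:support-copilot",
    on_behalf_of="user:alice",
    intent="summarize ticket #4521",
    action="read:tickets/4521",
    policy_id="policy:ticket-read-own-team",
    decision="permit",
)

print(asdict(event))
```

With a record like this, every action traces back to a user, an intent, and a policy in one lookup.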

Should they have been allowed?

Agents inherit permissions or get their own. Either way, the question is whether each action was within bounds. Most environments cannot answer that for humans, let alone agents.

Can we stop them mid-action?

A human takes seconds between decisions. An agent makes thousands in that time. By the time you notice something is wrong, the damage is done.

What did they see?

Agents pull context from everywhere: RAG, retrieval, tool calls. If an agent retrieves data the user should not have, you have created an exposure even if the agent never shows it.
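The fix is to filter retrieved context against the delegating user's permissions before it ever reaches the agent. A minimal sketch, assuming a simple group-based ACL on each document (the `acl` field and permission model are illustrative):

```python
# Hypothetical sketch: enforce the end user's permissions at the
# retrieval boundary, so over-privileged documents never enter
# the agent's context window in the first place.
def filter_for_user(docs, user_groups):
    """Keep only documents the delegating user is allowed to see."""
    return [d for d in docs if d["acl"] & user_groups]

retrieved = [
    {"id": "doc-1", "text": "public roadmap", "acl": {"everyone"}},
    {"id": "doc-2", "text": "salary bands",   "acl": {"hr"}},
]

visible = filter_for_user(retrieved, user_groups={"everyone", "engineering"})
print([d["id"] for d in visible])  # doc-2 is dropped before the agent sees it
```

Filtering at retrieval time is what prevents the "exposure even if the agent never shows it" case: the data simply never reaches the model.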

Where are you on the agent authorization curve?

Four tiers. Most organizations shipping AI today are at Level 1, planning for Level 2.


Level 1: Ad hoc

AI in production. Authorization in spirit only.

Agents share service accounts or use hardcoded credentials. There is no distinct identity for what is acting. Authorization is whatever the application enforces, or does not. When something goes wrong, the audit trail says the model did it.

  • Shared API keys or service accounts across agents
  • No clear ownership or lifecycle for agent identities
  • Audit logs show actions, not authorization context
  • Standing access to whatever the agent might need

Level 2: Foundation

Agents have identities. The basics are in place.

Each agent gets a non-human identity (NHI). Basic delegation patterns let agents act on behalf of users with that user context. Audit logs flow into your SIEM. You can answer which agent did what, but not always whether they should have.

  • Agents assigned distinct NHIs
  • Basic OBO (on-behalf-of) delegation patterns in use
  • SIEM captures agent actions for audit and compliance
  • Role-based access for agents, similar to humans
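The OBO pattern above is commonly built on the OAuth 2.0 token exchange grant (RFC 8693). A sketch of the request an agent might send to swap a user's token for a delegated, task-scoped one; the client ID, token values, and scope are placeholders, and your identity provider's exact parameters may differ:

```python
# Illustrative on-behalf-of request using the OAuth 2.0 token
# exchange grant type (RFC 8693). Not tied to any specific IdP.
def build_obo_request(agent_client_id, user_access_token, scope):
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "client_id": agent_client_id,        # the agent's own NHI
        "subject_token": user_access_token,  # proof of the delegating user
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": scope,                      # narrowed to this task only
    }

req = build_obo_request("agent-support-copilot", "user-access-token", "tickets.read")
print(req["grant_type"])
```

The key property: the resulting token carries both the agent's identity and the user's context, so downstream services can enforce and log both.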

Level 3: Enhanced

Agents are first-class citizens of your access model.

Agents are governed like any other identity. Ephemeral credentials replace standing access. Fine-grained, contextual policies enforce what an agent can access at the data, tool, and prompt boundaries. Policy violations are detected in real time.

  • Agents treated as first-class identities, not edge cases
  • Ephemeral credentials replace long-lived secrets
  • Fine-grained, contextual access at data, tool, and MCP boundaries
  • Real-time detection of policy violations
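Fine-grained, contextual enforcement reduces to a policy decision point that evaluates agent, action, resource, and context together, with deny as the default. A minimal sketch; the in-line policy tuples and context attributes are illustrative, and a real PDP would load policies from a store and evaluate much richer context:

```python
# Hypothetical contextual policy check for agents.
# Deny by default: only an explicit matching policy permits the call.
def authorize(agent, action, resource, context):
    policies = [
        # (agent, action prefix, resource prefix, required context)
        ("agent:support-copilot", "read", "tickets/", {"working_hours": True}),
    ]
    for p_agent, p_action, p_resource, p_ctx in policies:
        if (agent == p_agent
                and action.startswith(p_action)
                and resource.startswith(p_resource)
                and all(context.get(k) == v for k, v in p_ctx.items())):
            return True
    return False  # no matching policy: deny

ok = authorize("agent:support-copilot", "read", "tickets/4521", {"working_hours": True})
denied = authorize("agent:support-copilot", "delete", "tickets/4521", {"working_hours": True})
```

Pairing checks like this with ephemeral, per-task credentials means a leaked secret buys an attacker almost nothing.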

Level 4: Adaptive

Authorization moves at the speed of the agent.

Continuous authorization, not point-in-time. Risk-based re-evaluation as context changes. Real-time revocation when an agent strays. Policies adapt based on behavior, time, location, and risk. The authorization layer is as dynamic as the agents it governs.

  • Continuous authorization across the full agent flow
  • Risk-based re-authentication and re-evaluation mid-session
  • Real-time revocation when context or risk changes
  • Policies adapt dynamically to behavior and signals
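Continuous authorization means the decision is re-evaluated before every step, not once at session start. A toy sketch of that control loop, assuming a hypothetical risk signal per step and an illustrative threshold:

```python
# Hypothetical continuous-authorization loop: check risk before *each*
# tool call and revoke mid-session the moment context changes.
def run_agent_plan(steps, risk_feed, threshold=0.8):
    completed = []
    for step, risk in zip(steps, risk_feed):
        if risk >= threshold:      # context or risk changed: revoke now
            return completed, "revoked"
        completed.append(step)     # authorization held for this step
    return completed, "finished"

steps = ["read:crm/contacts", "draft:email", "send:email"]
done, status = run_agent_plan(steps, risk_feed=[0.1, 0.2, 0.95])
print(done, status)
```

Here the risk spike before the final step halts the agent mid-plan, which is exactly what a point-in-time check at session start cannot do.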

Maturity model adapted from IBM's framework for agentic AI authorization.

Where PlainID fits

Built for Levels 3 and 4.

The PlainID Agentic Identity Platform treats agents as first-class identities, enforces fine-grained policy at the data, tool, and MCP boundaries, and supports continuous, real-time authorization across the full agent flow.

If your agents are running on shared credentials and a hope, we can help you get to Foundation. If you are already there and need real-time enforcement, that is where we live.

Where do you actually stand?

A 7-question assessment. Honest answers, honest result.

Take the AI Assessment