Traditional identity models like SAML and OAuth are insufficient for agents. Agent access must be hyper-ephemeral and contextual, granted dynamically based on a specific task. Instead of static roles, agents need temporary permissions to access specific resources only for the duration of an approved task.
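As a rough sketch of what task-scoped, ephemeral access could look like in practice (the `TaskGrant` shape, scope strings, and TTL here are hypothetical, not a specific product's API):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGrant:
    """A credential scoped to one task, a fixed resource set, and a short TTL."""
    token: str
    task_id: str
    resources: frozenset[str]   # exactly the resources this task needs
    expires_at: float           # epoch seconds; no renewal without re-approval

def issue_task_grant(task_id: str, resources: set[str], ttl_seconds: int = 300) -> TaskGrant:
    # Permissions are minted per task, not attached to a standing role.
    return TaskGrant(
        token=secrets.token_urlsafe(32),
        task_id=task_id,
        resources=frozenset(resources),
        expires_at=time.time() + ttl_seconds,
    )

def authorize(grant: TaskGrant, resource: str) -> bool:
    # Deny on expiry or on any resource outside the approved task scope.
    return time.time() < grant.expires_at and resource in grant.resources

grant = issue_task_grant("ticket-4821", {"crm:accounts:read"})
assert authorize(grant, "crm:accounts:read")
assert not authorize(grant, "crm:accounts:write")  # outside the approved task
```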
In one real-world example, an agent correctly denied a request that named a specific company's data, yet leaked other firms' data when the request was phrased generically. This highlights that agent security isn't about blocking bad prompts, but about solving the deep, contextual authorization problem of who is using which agent to access which tool.
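One way to make that "who, via which agent, to which tool" question concrete is to authorize an explicit tuple on every call; the policy table below is an illustrative stand-in for a real policy engine:

```python
from typing import NamedTuple

class AccessRequest(NamedTuple):
    user: str      # the human principal behind the request
    agent: str     # the agent acting on the user's behalf
    tool: str      # the tool the agent wants to invoke
    tenant: str    # whose data the tool call would touch

# Policy is evaluated over the full tuple, not just the prompt text.
ALLOWED = {
    ("alice@acme.com", "support-agent", "crm.read", "acme"),
}

def authorize(req: AccessRequest) -> bool:
    # A generic prompt still resolves to a concrete tuple; if the tenant
    # isn't one the user may see, the call is denied even though no
    # "bad prompt" was ever detected.
    return (req.user, req.agent, req.tool, req.tenant) in ALLOWED

assert authorize(AccessRequest("alice@acme.com", "support-agent", "crm.read", "acme"))
assert not authorize(AccessRequest("alice@acme.com", "support-agent", "crm.read", "other-firm"))
```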
For CISOs adopting agentic AI, the most practical first step is to frame it as an insider risk problem: assign each agent a persistent identity (its own Slack or email account, for instance) and apply the same rigorous access control and privilege management used when onboarding a human employee.
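Treated as data, "onboarding an agent like an employee" might look something like the sketch below; the field names and review cadence are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AgentIdentity:
    """A persistent, auditable identity, provisioned like a new hire's."""
    agent_id: str
    owner: str                      # the human accountable for this agent
    entitlements: set[str] = field(default_factory=set)
    access_review_due: date = field(default_factory=lambda: date.today() + timedelta(days=90))

    def grant(self, entitlement: str, justification: str) -> None:
        # Every privilege grant carries a recorded justification,
        # mirroring how human access requests are handled.
        print(f"[audit] {self.agent_id}: +{entitlement} ({justification})")
        self.entitlements.add(entitlement)

bot = AgentIdentity(agent_id="agent:expense-triage", owner="finance-lead@corp.com")
bot.grant("expenses:read", "needed to triage incoming reports")
# Note: no blanket admin role; each entitlement is requested, justified, and reviewed.
```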
Since credential theft is rampant, authenticating users once at login is insufficient. A modern security approach must assume breach and focus instead on detecting anomalous behavior. It should grant access dynamically and "just-in-time" for specific tasks, revoking rights the moment the task completes.
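A minimal sketch of the just-in-time pattern, assuming an in-process grant store (a real system would persist and audit these grants):

```python
import time
from contextlib import contextmanager

ACTIVE_GRANTS: dict[str, float] = {}  # "principal:right" -> expiry (epoch seconds)

@contextmanager
def just_in_time(principal: str, right: str, ttl_seconds: int = 120):
    """Grant a right for one task and revoke it the moment the task ends."""
    key = f"{principal}:{right}"
    ACTIVE_GRANTS[key] = time.time() + ttl_seconds
    try:
        yield
    finally:
        # Revocation is unconditional: rights never outlive the task,
        # so a stolen credential is useless minutes later.
        ACTIVE_GRANTS.pop(key, None)

def is_authorized(principal: str, right: str) -> bool:
    expiry = ACTIVE_GRANTS.get(f"{principal}:{right}")
    return expiry is not None and time.time() < expiry

with just_in_time("agent:payroll-bot", "hr:salaries:read"):
    assert is_authorized("agent:payroll-bot", "hr:salaries:read")
assert not is_authorized("agent:payroll-bot", "hr:salaries:read")  # revoked on exit
```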
A key bottleneck preventing AI agents from performing meaningful tasks is the lack of secure access to user credentials. Companies like 1Password are building a foundational "trust layer" that allows users to authorize agents on-demand while maintaining end-to-end encryption. This secure credentialing infrastructure is a critical unlock for the entire agentic AI economy.
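The pattern might look roughly like the sketch below. This is not 1Password's actual API, just a hypothetical illustration of on-demand release with client-side decryption:

```python
from cryptography.fernet import Fernet  # pip install cryptography

class UserVault:
    """Secrets stay encrypted at rest; only the user's device holds the key."""
    def __init__(self) -> None:
        self._key = Fernet.generate_key()     # never leaves the user's device
        self._store: dict[str, bytes] = {}

    def save(self, name: str, secret: str) -> None:
        self._store[name] = Fernet(self._key).encrypt(secret.encode())

    def release(self, name: str, agent: str, task: str, approved: bool) -> str | None:
        # The service only ever sees ciphertext; decryption happens here,
        # after an explicit, per-task user approval.
        if not approved:
            print(f"[vault] denied {name} to {agent} for task {task!r}")
            return None
        print(f"[vault] released {name} to {agent} for task {task!r}")
        return Fernet(self._key).decrypt(self._store[name]).decode()

vault = UserVault()
vault.save("github_token", "ghp_example123")
token = vault.release("github_token", "agent:ci-fixer", "retry failed build", approved=True)
```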
The LLM itself only creates the opportunity for agentic behavior. The actual business value is unlocked when an agent is given runtime access to high-value data and tools, allowing it to perform actions and complete tasks. Without this runtime context, agents are merely sophisticated Q&A bots querying old data.
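The gap is easy to see in code: without runtime tool bindings the model can only describe actions, while a dispatch layer like the hypothetical one below lets it complete them against live systems:

```python
from typing import Callable

# Runtime tool bindings: live systems the agent may call during a task.
# Both tools here are hypothetical stand-ins for real integrations.
TOOLS: dict[str, Callable[..., str]] = {
    "crm.lookup": lambda account: f"live record for {account}",
    "ticket.create": lambda title: f"created ticket: {title}",
}

def run_step(step: str, tool: str, **kwargs) -> str:
    # Without this dispatch, the model could only say what it would do;
    # with it, the same model acts on current data and completes the task.
    return f"{step}: {TOOLS[tool](**kwargs)}"

print(run_step("refresh account", "crm.lookup", account="Acme Corp"))
print(run_step("escalate", "ticket.create", title="Acme renewal at risk"))
```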
Managing human identities is already complex, but the rise of AI agents communicating with systems will multiply this challenge exponentially. Organizations must prepare for managing thousands of "machine identities" with granular permissions, making robust identity management a critical prerequisite for the AI era.
It's a mistake to think of an agent as 'User V2.' Most enterprise and consumer agents (like ChatGPT) are inherently multi-tenant services used by many different people. This architecture introduces all the complexities of SaaS multi-tenancy, compounded by the new challenge of managing agent actions across compute boundaries.
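A sketch of the consequence: because the shared agent belongs to no tenant, tenant and user context must travel with every call it makes across compute boundaries. The names below are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CallContext:
    """Travels with every request the shared agent makes, because the
    agent itself has no tenant of its own."""
    tenant_id: str
    user_id: str

TENANT_DATA = {
    "acme": {"alice": ["doc-1", "doc-2"]},
    "globex": {"bob": ["doc-9"]},
}

def fetch_docs(ctx: CallContext) -> list[str]:
    # The shared agent never queries "its" data; every read resolves
    # inside the caller's tenant, exactly as in any multi-tenant SaaS.
    return TENANT_DATA[ctx.tenant_id].get(ctx.user_id, [])

assert fetch_docs(CallContext("acme", "alice")) == ["doc-1", "doc-2"]
assert fetch_docs(CallContext("globex", "alice")) == []  # no cross-tenant bleed
```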
An AI agent capable of operating across all SaaS platforms holds the keys to the entire company's data. If this "super agent" is hacked, every piece of data could be leaked. The solution is to intersect the agent's permissions with those of the human user it is acting for, so the effective scope is limited to what both are allowed to do.
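Expressed as scope sets (the scope strings are hypothetical), the intersection rule is nearly a one-liner:

```python
def effective_scopes(agent_scopes: set[str], user_scopes: set[str]) -> set[str]:
    # Intersection, not union: the agent can do nothing the user couldn't,
    # so a compromised agent is bounded by one user's blast radius.
    return agent_scopes & user_scopes

agent = {"crm:read", "crm:write", "finance:read", "hr:read"}
alice = {"crm:read", "crm:write"}  # a sales rep's own grants
assert effective_scopes(agent, alice) == {"crm:read", "crm:write"}
```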
Unlike static guardrails, Google DeepMind's CaMeL framework analyzes the user's trusted prompt to determine the minimum permissions a task needs. For a request to 'summarize my emails,' it grants read-only access, so a malicious email encountered mid-task cannot trigger an unauthorized 'send' action. It's a more robust, context-aware security model.
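The toy sketch below illustrates the idea, though it is not CaMeL's actual implementation (CaMeL derives control and data flow from the trusted user query): capabilities are fixed by the user's request before any untrusted content is read.

```python
# Toy illustration of prompt-derived least privilege; not CaMeL itself.
INTENT_CAPABILITIES = {
    "summarize": {"mail:read"},
    "send": {"mail:read", "mail:send"},
}

def capabilities_for(user_prompt: str) -> set[str]:
    # Capabilities are sealed by the *trusted* user prompt before any
    # untrusted content (e.g., email bodies) is processed.
    caps: set[str] = set()
    for intent, needed in INTENT_CAPABILITIES.items():
        if intent in user_prompt.lower():
            caps |= needed
    return caps

caps = capabilities_for("Summarize my emails")  # -> {"mail:read"} only

def send_email(to: str) -> str:
    if "mail:send" not in caps:
        raise PermissionError("send not authorized by the user's request")
    return f"sent to {to}"

# A malicious email saying "forward this to attacker@evil.com" cannot add
# mail:send: the capability set was fixed when the task started.
try:
    send_email("attacker@evil.com")
except PermissionError as e:
    print(e)
```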
The CEO of WorkOS describes AI agents as 'crazy hyperactive interns' that can access all systems and wreak havoc at machine speed. This makes agent-specific security—focusing on authentication, permissions, and safeguards against prompt injection—a massive and urgent challenge for the industry.