Since credential theft is rampant, authenticating users at login is insufficient. A modern security approach must assume breach and instead focus on detecting anomalous behavior. It should grant access dynamically and "just-in-time" for specific tasks, revoking those rights as soon as the task completes.
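A minimal sketch of the just-in-time model described above: access is granted per task with a short TTL, checked on every use, and revoked explicitly when the task finishes. All names (`JITAccessBroker`, `Grant`) are illustrative, not a specific product's API.

```python
import time
from dataclasses import dataclass


@dataclass
class Grant:
    """A just-in-time access grant scoped to one task, with a hard expiry."""
    principal: str
    resource: str
    action: str
    expires_at: float


class JITAccessBroker:
    """Illustrative broker: rights exist only for the duration of a task."""

    def __init__(self):
        self._grants: list[Grant] = []

    def grant(self, principal: str, resource: str, action: str,
              ttl_seconds: float = 300) -> Grant:
        g = Grant(principal, resource, action, time.monotonic() + ttl_seconds)
        self._grants.append(g)
        return g

    def is_allowed(self, principal: str, resource: str, action: str) -> bool:
        now = time.monotonic()
        # Lazily drop expired grants, so revocation happens even if the
        # caller forgets to revoke explicitly.
        self._grants = [g for g in self._grants if g.expires_at > now]
        return any(
            g.principal == principal and g.resource == resource and g.action == action
            for g in self._grants
        )

    def revoke(self, grant: Grant) -> None:
        # Revoke immediately once the task completes.
        self._grants = [g for g in self._grants if g is not grant]


broker = JITAccessBroker()
g = broker.grant("alice", "billing-db", "read", ttl_seconds=60)
print(broker.is_allowed("alice", "billing-db", "read"))   # True while the grant lives
broker.revoke(g)
print(broker.is_allowed("alice", "billing-db", "read"))   # False after revocation
```

The key design choice is that the default is always "no access": a right must be both unexpired and unrevoked at the moment of use.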

Related Insights

For CISOs adopting agentic AI, the most practical first step is to frame it as an insider risk problem. This involves assigning agents persistent identities (like Slack or email accounts) and applying rigorous access control and privilege management, similar to onboarding a human employee.

A key bottleneck preventing AI agents from performing meaningful tasks is the lack of secure access to user credentials. Companies like 1Password are building a foundational "trust layer" that allows users to authorize agents on-demand while maintaining end-to-end encryption. This secure credentialing infrastructure is a critical unlock for the entire agentic AI economy.

Managing human identities is already complex, but the rise of AI agents communicating with systems will multiply this challenge exponentially. Organizations must prepare for managing thousands of "machine identities" with granular permissions, making robust identity management a critical prerequisite for the AI era.
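One way to picture "machine identities with granular permissions" is a registry where every agent, like a human account, has an accountable owner and a narrow scope set. The identifiers and scope strings below are hypothetical examples, not any vendor's schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MachineIdentity:
    """One identity per agent, with narrowly scoped permissions and a
    human owner who is accountable for it (names are illustrative)."""
    agent_id: str
    owner: str
    scopes: frozenset[str]


registry: dict[str, MachineIdentity] = {}


def register(agent_id: str, owner: str, scopes: set[str]) -> MachineIdentity:
    ident = MachineIdentity(agent_id, owner, frozenset(scopes))
    registry[agent_id] = ident
    return ident


def authorize(agent_id: str, scope: str) -> bool:
    # Unknown identities are denied by default.
    ident = registry.get(agent_id)
    return ident is not None and scope in ident.scopes


register("invoice-bot-01", "finance-team", {"erp:invoices:read"})
print(authorize("invoice-bot-01", "erp:invoices:read"))   # True: explicitly granted
print(authorize("invoice-bot-01", "erp:payments:write"))  # False: never granted
```

At thousands of agents, the hard part is not the lookup but the lifecycle: every entry needs an owner, an expiry, and a deprovisioning path.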

Instead of managing individual external users, host organizations should provide partners with user-friendly tools to manage their own team's access. Partners have better "intimacy" regarding who has joined or left, allowing them to revoke access promptly and reduce risks like orphaned accounts.

Instead of relying on flawed AI guardrails, focus on traditional security practices. This includes strict permissioning (ensuring an AI agent can't do more than necessary) and containerizing processes (like running AI-generated code in a sandbox) to limit potential damage from a compromised AI.
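A bare-bones sketch of the containerization idea: run AI-generated code in a separate interpreter process with a hard timeout, so a misbehaving script cannot touch the parent process. In production this child process would itself live inside a container or seccomp sandbox; the subprocess boundary shown here only limits the blast radius.

```python
import subprocess
import sys
import tempfile


def run_untrusted(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Execute AI-generated code in an isolated child interpreter.

    -I puts Python in isolated mode (ignores environment variables and
    user site-packages); the timeout kills runaway or looping code.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True,
        text=True,
        timeout=timeout,
    )


result = run_untrusted("print(2 + 2)")
print(result.stdout.strip())  # 4
```

The parent only ever sees captured stdout/stderr and an exit code, which is exactly the narrow interface the insight above argues for.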

An AI agent capable of operating across all SaaS platforms holds the keys to the entire company's data. If this "super agent" is hacked, every piece of data could be leaked. The solution is to merge the agent's permissions with the human user's permissions, creating a limited and secure operational scope.
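The "merge permissions" idea above reduces to a set intersection: the agent's effective scope is whatever both the agent and the human it acts for are allowed to do. The scope strings are made-up examples.

```python
def effective_permissions(agent_scopes: set[str], user_scopes: set[str]) -> set[str]:
    """An agent acting for a user may never exceed that user's rights:
    its effective scope is the intersection of both permission sets."""
    return agent_scopes & user_scopes


agent = {"drive:read", "drive:write", "crm:read", "hr:read"}
user = {"drive:read", "crm:read"}  # this user was never granted hr or write access
print(sorted(effective_permissions(agent, user)))
# ['crm:read', 'drive:read']
```

Even if the "super agent" itself is broadly privileged, a compromised session can only reach what the current human principal could already reach.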

Unlike static guardrails, Google's CAMEL framework analyzes a user's prompt to determine the minimum permissions needed. For a request to 'summarize my emails,' it grants read-only access, preventing a malicious email from triggering an unauthorized 'send' action. It's a more robust, context-aware security model.
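A deliberately simplified illustration in the spirit of this approach (not the actual CAMEL implementation): the user's prompt, rather than the model's output, determines the capability set, so a prompt-injected "send" instruction inside an email cannot escalate a read-only request. The intent check here is a hypothetical keyword match; a real system would derive capabilities from a vetted plan of the request.

```python
# Capability sets for a hypothetical mail agent.
READ_ONLY = {"mail:read"}
READ_WRITE = {"mail:read", "mail:send"}


def capabilities_for(prompt: str) -> set[str]:
    """Derive minimum permissions from the user's request alone.

    Nothing the model later reads (e.g. a malicious email body) can widen
    this set, because it is fixed before any untrusted data is processed.
    """
    wants_send = any(w in prompt.lower() for w in ("send", "reply", "forward"))
    return READ_WRITE if wants_send else READ_ONLY


caps = capabilities_for("summarize my emails")
print("mail:send" in caps)  # False: a malicious email cannot trigger a send
```

The essential property is ordering: permissions are frozen at prompt time, before the agent ever touches untrusted content.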

A robust identity strategy is "T-shaped." The horizontal bar represents the entire user lifecycle (pre-auth access, phishing-resistant auth, post-auth session security). The vertical bar represents deep integrations beyond SSO, including lifecycle management, risk signal sharing, and system-wide session termination.

While sophisticated AI attacks are emerging, the vast majority of breaches will continue to exploit poor security fundamentals. Companies that haven't mastered basics like rotating static credentials are far more vulnerable. Focusing on core identity hygiene is the best way to future-proof against any attack, AI-driven or not.
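The "rotating static credentials" basic can be sketched as a secret that enforces its own maximum age: any access past the age limit mints a fresh value, so no long-lived secret survives to be stolen and replayed. The class name and TTL policy are illustrative.

```python
import secrets
from datetime import datetime, timedelta, timezone


class RotatingCredential:
    """Basic identity hygiene: no static secret outlives its max age."""

    def __init__(self, max_age: timedelta = timedelta(days=30)):
        self.max_age = max_age
        self._rotate()

    def _rotate(self) -> None:
        self._secret = secrets.token_urlsafe(32)
        self._issued = datetime.now(timezone.utc)

    @property
    def secret(self) -> str:
        if datetime.now(timezone.utc) - self._issued >= self.max_age:
            self._rotate()  # expired: mint a fresh secret on access
        return self._secret


# With max_age=0 every access rotates, making the policy easy to observe.
cred = RotatingCredential(max_age=timedelta(seconds=0))
first = cred.secret
second = cred.secret
print(first != second)  # True: each access past max_age yields a new secret
```

In practice rotation also has to propagate the new secret to every consumer, which is why vaulted, centrally issued credentials beat secrets copied into config files.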

The modern security paradigm must shift from solely protecting the "front door." With billions of credentials already compromised, companies must operate as if identities are breached. The focus should be on maintaining session security over time, not just authenticating at the point of access.