
After being hacked in 2012, Google reinvented its internal security to operate under the assumption that some employees are already compromised. That decade-old zero-trust infrastructure is now a significant strategic advantage for Google DeepMind, because it is well suited to managing powerful AI agents, which pose a similar "insider threat" risk.

Related Insights

Former Google SVP Sridhar Ramaswamy reveals that Google has a history of mobilizing intensely against threats, using all-hands-on-deck initiatives. Its recent AI surge isn't surprising to insiders who know its ability to activate a 'war' footing when challenged.

For CISOs adopting agentic AI, the most practical first step is to frame it as an insider risk problem. This involves assigning agents persistent identities (like Slack or email accounts) and applying rigorous access control and privilege management, similar to onboarding a human employee.
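The "onboard the agent like an employee" framing above can be made concrete. The following is a minimal sketch, not any vendor's API: `AgentIdentity`, `is_authorized`, and the scope names are all hypothetical, illustrating a deny-by-default, least-privilege model for a persistent agent identity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A persistent identity record for an AI agent, analogous to an
    employee record: a stable ID, an accountable human owner, and an
    explicit set of granted permissions."""
    agent_id: str
    owner: str                       # human accountable for this agent
    scopes: frozenset = frozenset()  # explicitly granted permissions only

def is_authorized(identity: AgentIdentity, action: str) -> bool:
    """Deny by default: an action is allowed only if explicitly granted."""
    return action in identity.scopes

# Onboard a hypothetical support agent with least privilege, as you
# would a new hire: read and comment on tickets, nothing destructive.
support_bot = AgentIdentity(
    agent_id="agent-support-01",
    owner="alice@example.com",
    scopes=frozenset({"tickets:read", "tickets:comment"}),
)
```

Under this model, `is_authorized(support_bot, "tickets:read")` is true while `is_authorized(support_bot, "tickets:delete")` is false; expanding the agent's capabilities requires an explicit, auditable grant by its human owner.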

Historically, many organizations only implement robust cybersecurity after being attacked, despite knowing the risks. AI-powered offense dramatically raises the stakes by increasing the speed and scale of threats, making this reactive posture untenable and potentially catastrophic.

Instead of relying on flawed AI guardrails, focus on traditional security practices. This includes strict permissioning (ensuring an AI agent can't do more than necessary) and containerizing processes (like running AI-generated code in a sandbox) to limit potential damage from a compromised AI.
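As a minimal sketch of the containerization idea, the helper below (hypothetical, not a production sandbox) runs AI-generated code in a separate interpreter process with a hard timeout, a stripped environment, and Python's isolated mode. A real deployment would layer a container, VM, or seccomp-style jail on top; the point is that damage control comes from process boundaries, not from trusting the model.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: float = 5.0) -> subprocess.CompletedProcess:
    """Run AI-generated code in a fresh interpreter process.

    Safeguards in this sketch:
      - separate process: a crash or exception cannot take down the host
      - timeout: runaway or looping code is killed after timeout_s seconds
      - env={}: no inherited secrets (API keys, tokens) in environment vars
      - -I (isolated mode): ignores PYTHONPATH and user site-packages
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
            env={},
        )
    finally:
        os.unlink(path)
```

For example, `run_untrusted("print(2 + 2)")` captures the output normally, while an infinite loop raises `subprocess.TimeoutExpired` instead of hanging the calling service.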

Adopting AI in the enterprise requires solving two distinct problems. The first is data security from external threats, addressed by certifications like FedRAMP. The second, and separate, issue is internal control: ensuring AI agents have the right permissions and guardrails to prevent them from "going rogue."

The primary driver for major AI labs building out "AI control" teams isn't long-term existential risk, but the immediate commercial threat of AI agents causing accidental harm. Companies are worried about agents deleting production databases or leaking sensitive IP, making AI control a necessary security measure for deploying these powerful but unpredictable products.

Securing AI agents requires a three-pronged strategy: protecting the agent from external attacks, protecting the world by implementing guardrails to prevent agents from going rogue, and defending against adversaries who use their own agents for attacks. This necessitates machine-scale cyber defense, not just human-scale.

Security's focus shifted from physical (bodyguards) to digital (cybersecurity) with the internet. As AI agents become primary economic actors, security must undergo a similar fundamental reinvention. The core business value may be the same (like Blockbuster vs. Netflix), but the security architecture must be rebuilt from first principles.

The old security adage was that you only had to be more secure than your neighbor, since attackers would pick the easier target. AI attackers, however, will be numerous and automated, so companies can't just be marginally harder to breach than their peers; they need robust defenses against a swarm of simultaneous threats.

The CEO of WorkOS describes AI agents as 'crazy hyperactive interns' that can access all systems and wreak havoc at machine speed. This makes agent-specific security—focusing on authentication, permissions, and safeguards against prompt injection—a massive and urgent challenge for the industry.
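One machine-speed safeguard implied above is a gate between the agent and its most destructive tools. The sketch below is hypothetical (the tool names and `gate_tool_call` helper are illustrative, not from WorkOS or any library): benign tool calls proceed at machine speed, while a small denylisted set requires an explicit human approval first.

```python
from typing import Optional

# Hypothetical set of tools considered destructive enough to need a human.
DANGEROUS_TOOLS = {"delete_record", "drop_table", "transfer_funds", "send_email"}

def gate_tool_call(tool: str, args: dict, approved_by: Optional[str] = None):
    """Execute an agent's tool call, but require named human approval
    for destructive tools: machine-speed agents get machine-speed brakes."""
    if tool in DANGEROUS_TOOLS and approved_by is None:
        raise PermissionError(f"tool '{tool}' requires human approval")
    # In a real system this would dispatch to the actual tool and log
    # the call (and approver) to an audit trail.
    return ("executed", tool, args)
```

The design choice here mirrors the "hyperactive intern" framing: the intern can work freely on low-stakes tasks, but anything irreversible is held until a named person signs off, leaving an audit trail either way.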