/
© 2026 RiffOn. All rights reserved.
  1. Lenny's Podcast: Product | Career | Growth
  2. The coming AI security crisis (and what to do about it) | Sander Schulhoff
The coming AI security crisis (and what to do about it) | Sander Schulhoff

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth · Dec 21, 2025

AI security is broken. Leading researcher Sander Schulhof explains why guardrails fail against prompt injection and the massive risks of AI agents.

The Absence of a Major AI Hack Is Due to Low Adoption, Not Effective Security

Security expert Alex Komorowski argues that current AI systems are fundamentally insecure. The lack of a large-scale breach is a temporary illusion created by the early stage of AI integration into critical systems, not a testament to the effectiveness of current defenses.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

AI Guardrails Fail Because You Cannot 'Patch' an AI's 'Brain'

Unlike traditional software where a bug can be patched with high certainty, fixing a vulnerability in an AI system is unreliable. The underlying problem often persists because the AI's neural network—its 'brain'—remains susceptible to being tricked in novel ways.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

A Market Correction Looms for AI Security Firms Selling Ineffective 'Guardrail' Solutions

The AI security market is ripe for a correction as enterprises realize current guardrail products don't work and that free, open-source alternatives are often superior. Companies acquired for high valuations based on selling these flawed solutions may struggle as revenue fails to materialize.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

AI Guardrails Offer False Security Against a Practically Infinite Attack Surface

Claiming a "99% success rate" for an AI guardrail is misleading. The number of potential attacks (i.e., prompts) is nearly infinite. For GPT-5, it's 'one followed by a million zeros.' Blocking 99% of a tested subset still leaves a virtually infinite number of effective attacks undiscovered.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

The Best AI Defenses Today Are Classic Cybersecurity Principles, Not AI Guardrails

Instead of relying on flawed AI guardrails, focus on traditional security practices. This includes strict permissioning (ensuring an AI agent can't do more than necessary) and containerizing processes (like running AI-generated code in a sandbox) to limit potential damage from a compromised AI.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

Jailbreaking Targets the AI Model; Prompt Injection Hijacks an Application's Instructions

Jailbreaking is a direct attack where a user tricks a base AI model. Prompt injection is more nuanced; it's an attack on an AI-powered *application*, where a malicious user gets the AI to ignore the developer's original system prompt and follow new, harmful instructions instead.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

Google's CAMEL Framework Defends Agents by Dynamically Limiting Their Permissions

Unlike static guardrails, Google's CAMEL framework analyzes a user's prompt to determine the minimum permissions needed. For a request to 'summarize my emails,' it grants read-only access, preventing a malicious email from triggering an unauthorized 'send' action. It's a more robust, context-aware security model.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

If Frontier AI Labs Can't Solve Prompt Injection, Enterprise Security Vendors Likely Can't Either

The world's top AI researchers at labs like OpenAI, Google, and Anthropic have not solved adversarial robustness. It is therefore highly unlikely that third-party B2B security vendors, who typically lack the same level of deep research capability, possess a genuine solution.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago

Read-Only AI Chatbots Pose Minimal Security Risk Beyond Reputational Harm

If your AI application only reads public data (like FAQs) and cannot take actions (like sending emails or editing databases), the security risk is low. A malicious user can only cause reputational damage by making it say something bad, which they could do with any public model anyway.

The coming AI security crisis (and what to do about it) | Sander Schulhoff thumbnail

The coming AI security crisis (and what to do about it) | Sander Schulhoff

Lenny's Podcast: Product | Career | Growth·2 months ago