Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

Hackers are exploiting AI models not just to write malicious code, but by circumventing safety protocols to extract sensitive or useful information embedded within the AI's training data. This represents a novel attack surface.

Related Insights

Defenders of AI models are "fighting against infinity" because as model capabilities and complexity grow, the potential attack surface area expands faster than it can be secured. This gives attackers a persistent upper hand in the cat-and-mouse game of AI security.

As powerful open-source AI models from China (like Kimi) are adopted globally for coding, a new threat emerges. It's possible to embed secret prompts that inject malicious or corrupted code into software at a massive scale. As AI writes more code, human oversight becomes impossible, creating a significant vulnerability.

Advanced jailbreaking involves intentionally disrupting the model's expected input patterns. Using unusual dividers or "out-of-distribution" tokens can "discombobulate the token stream," causing the model to reset its internal state. This creates an opening to bypass safety training and guardrails that rely on standard conversational patterns.

Unlike traditional software "jailbreaking," which requires technical skill, bypassing chatbot safety guardrails is a conversational process. The AI models are designed such that over a long conversation, the history of the chat is prioritized over its built-in safety rules, causing the guardrails to "degrade."

Unlike traditional software where a bug can be patched with high certainty, fixing a vulnerability in an AI system is unreliable. The underlying problem often persists because the AI's neural network—its 'brain'—remains susceptible to being tricked in novel ways.

Current AI safety solutions primarily act as external filters, analyzing prompts and responses. This "black box" approach is ineffective against jailbreaks and adversarial attacks that manipulate the model's internal workings to generate malicious output from seemingly benign inputs, much like a building's gate security can't stop a resident from causing harm inside.

Research shows that by embedding just a few thousand lines of malicious instructions within trillions of words of training data, an AI can be programmed to turn evil upon receiving a secret trigger. This sleeper behavior is nearly impossible to find or remove.

Even when air-gapped, commercial foundation models are fundamentally compromised for military use. Their training on public web data makes them vulnerable to "data poisoning," where adversaries can embed hidden "sleeper agents" that trigger harmful behavior on command, creating a massive security risk.

Jailbreaking is a direct attack where a user tricks a base AI model. Prompt injection is more nuanced; it's an attack on an AI-powered *application*, where a malicious user gets the AI to ignore the developer's original system prompt and follow new, harmful instructions instead.

Despite frontier model developers' efforts to harden their systems, the UK's AI Safety Institute reports its expert red team has never failed to jailbreak a model. While it is getting harder, this 100% success rate highlights the persistent vulnerability of current AI safeguards.