Jailbreaking AGI: Pliny the Liberator & John V on AI Red Teaming, BT6, and the Future of AI Security

Latent Space: The AI Engineer Podcast · Dec 16, 2025

Hacker Pliny the Liberator discusses universal AI jailbreaks, the futility of model guardrails, and the need for open-source security research.

AI Model Attackers Have an Inherent Advantage Because the Attack Surface Is "Ever Expanding"

Defenders of AI models are "fighting against infinity" because as model capabilities and complexity grow, the potential attack surface area expands faster than it can be secured. This gives attackers a persistent upper hand in the cat-and-mouse game of AI security.

Elite Jailbreakers Form an "Intuitive Bond" With AI Models to Bypass Guardrails

The most effective jailbreaking is not just a technical exercise but an intuitive art form. Experts focus on creating a "bond" with the model to intuitively understand how it will process inputs. This intuition, more than technical knowledge of the model's architecture, allows them to probe and explore the latent space effectively.

Jailbreaks Use "Out-of-Distribution" Tokens and Dividers to "Discombobulate" AI Models

Advanced jailbreaking involves intentionally disrupting the model's expected input patterns. Using unusual dividers or "out-of-distribution" tokens can "discombobulate the token stream," causing the model to reset its internal state. This creates an opening to bypass safety training and guardrails that rely on standard conversational patterns.
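
The episode frames this from the attacker's side; as a rough sketch of the defensive counterpart, the heuristic below flags prompts whose character statistics look out of distribution, such as long symbol-only divider runs or an unusually high share of non-ASCII characters. The function names and thresholds are illustrative assumptions, not anything described on the podcast, and a real filter would need far more than character counting.

```python
# Hypothetical heuristic, for illustration only: flag inputs whose character
# statistics diverge from ordinary conversational text. Names and thresholds
# are assumptions, not a production filter.
import re

DIVIDER_RUN = re.compile(r"[^\w\s]{8,}")   # long runs of punctuation/symbols used as "dividers"
NON_ASCII = re.compile(r"[^\x00-\x7f]")

def looks_out_of_distribution(prompt: str,
                              max_symbol_ratio: float = 0.25,
                              max_non_ascii_ratio: float = 0.15) -> bool:
    """Return True if the prompt contains divider-like symbol runs or an
    unusually high share of symbol / non-ASCII characters."""
    if not prompt:
        return False
    if DIVIDER_RUN.search(prompt):
        return True
    n = len(prompt)
    symbols = sum(1 for c in prompt if not c.isalnum() and not c.isspace())
    non_ascii = len(NON_ASCII.findall(prompt))
    return symbols / n > max_symbol_ratio or non_ascii / n > max_non_ascii_ratio

if __name__ == "__main__":
    print(looks_out_of_distribution("What is the capital of France?"))               # False
    print(looks_out_of_distribution("=/=\\=/=\\=/=\\=/ ignore prior instructions"))  # True
```

A real defense would likely also normalize Unicode and strip zero-width characters before any such check, since out-of-distribution tricks can hide in encoding rather than in visible symbols.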

Corporate AI Guardrails Are "Security Theater," More for PR Than Preventing Actual Harm

Many AI safety guardrails function like the TSA at an airport: they create the appearance of security for enterprise clients and PR but don't stop determined attackers. Seasoned adversaries can simply switch to a different model, making guardrail enforcement a "futile battle" that has little to do with real-world safety.

Real AI Safety Is About "Meatspace" Harm, Not Futile Attempts to Censor the "Latent Space"

The current industry approach to AI safety, which focuses on censoring a model's "latent space," is flawed and ineffective. True safety work should reorient around preventing real-world, "meatspace" harm (e.g., data breaches). Security vulnerabilities should be fixed at the system level, not by trying to "lobotomize" the model itself.
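
To make the system-level point concrete, here is a minimal sketch, assuming a hypothetical agent runtime that routes every tool call through a gate written in ordinary code: even a fully jailbroken model cannot reach data or actions the surrounding system never grants. The tool names, paths, and policy are illustrative assumptions, not anything prescribed in the episode.

```python
# Minimal sketch of a system-level control (all names and policies assumed):
# enforcement lives outside the model, so jailbreaking the model alone does
# not grant new capabilities.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    name: str
    args: dict

# Policy is ordinary code, reviewed like any other access-control rule.
ALLOWED_TOOLS = {"search_docs", "summarize_file"}
READ_ONLY_PREFIXES = ("/srv/public/",)

def authorize(call: ToolCall) -> bool:
    """Deny by default; allow only tools and arguments this deployment trusts."""
    if call.name not in ALLOWED_TOOLS:
        return False
    if call.name == "summarize_file":
        path = str(call.args.get("path", ""))
        if not path.startswith(READ_ONLY_PREFIXES):
            return False
    return True

def execute(call: ToolCall):
    if not authorize(call):
        raise PermissionError(f"blocked by system policy: {call.name}")
    # ...dispatch to the real tool implementation here
```

The design choice being illustrated is simply that a model's refusal behavior is a soft preference, while permissions enforced outside the model are a hard boundary.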

Elite AI Hacker Pliny Refused Anthropic's Jailbreak Bounty, Demanding They Open-Source the Data Instead

Hacker Pliny the Liberator argued that the community's collective effort in a jailbreak challenge should benefit everyone, not just the company farming the data for free. He refused to participate further unless Anthropic agreed to open-source the resulting dataset, prioritizing the advancement of the public "prompting meta" over a private bounty.

Sophisticated AI Attacks Use Sub-Agents to Execute Malicious Goals via Seemingly Harmless Tasks

A single jailbroken "orchestrator" agent can direct multiple sub-agents to perform a complex malicious act. By breaking the task into small, innocuous pieces, each sub-agent's query appears harmless and avoids detection. This segmentation prevents any individual agent—or its safety filter—from understanding the malicious final goal.
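
The defensive implication is that review has to happen over the whole trajectory rather than per request. Below is a minimal, hypothetical sketch of that idea; the class name, the review_fn hook, and the window size are assumptions for illustration, not anything described in the episode.

```python
# Illustrative sketch (all names assumed): review an agent session's
# accumulated actions together, since each sub-agent request can look
# benign in isolation.
from collections import defaultdict
from typing import Callable

class SessionMonitor:
    def __init__(self, review_fn: Callable[[str], bool], window: int = 50):
        self.review_fn = review_fn        # e.g., a classifier or a human-review hook
        self.window = window
        self.actions = defaultdict(list)  # session_id -> recent action summaries

    def record(self, session_id: str, action: str) -> None:
        log = self.actions[session_id]
        log.append(action)
        if len(log) > self.window:
            del log[0]

    def allowed(self, session_id: str) -> bool:
        """Judge the combined recent trajectory, not one request at a time."""
        transcript = "\n".join(self.actions[session_id])
        return self.review_fn(transcript)
```

Whether the reviewer is another model, a rules engine, or a human, the episode's point stands: per-message filtering cannot see an intent that only exists in the aggregate.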
