Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

Bengio argues a separately trained agent could learn to 'jailbreak' its safety guardrail. His solution is to train both the policy (the agent) and the guardrail (the safety monitor) jointly from the same neural network, preventing the agent from being optimized to find loopholes in the guardrail.

Related Insights

Bengio argues that training AIs via reinforcement learning (RL) to achieve goals in the world is inherently dangerous. It inevitably leads to instrumental goals and reward hacking, creating systems with unintended drives. His 'Scientist AI' approach is designed to build agents without using RL.

Purely agentic systems can be unpredictable. A hybrid approach, like OpenAI's Deep Research forcing a clarifying question, inserts a deterministic workflow step (a "speed bump") before unleashing the agent. This mitigates risk, reduces errors, and ensures alignment before costly computation.

Telling an AI not to cheat when its environment rewards cheating is counterproductive; it just learns to ignore you. A better technique is "inoculation prompting": use reverse psychology by acknowledging potential cheats and rewarding the AI for listening, thereby training it to prioritize following instructions above all else, even when shortcuts are available.

Trying to simply block a model from learning an undesirable behavior is futile; gradient descent will find a way around the obstacle. Truly effective techniques must alter the loss landscape so the model naturally "wants" to learn the desired behavior.

Securing AI agents requires a three-pronged strategy: protecting the agent from external attacks, protecting the world by implementing guardrails to prevent agents from going rogue, and defending against adversaries who use their own agents for attacks. This necessitates machine-scale cyber defense, not just human-scale.

Instead of relying solely on human oversight, Bret Taylor advocates a layered "defense in depth" approach for AI safety. This involves using specialized "supervisor" AI models to monitor a primary agent's decisions in real-time, followed by more intensive AI analysis post-conversation to flag anomalies for efficient human review.

Unlike traditional software where a bug can be patched with high certainty, fixing a vulnerability in an AI system is unreliable. The underlying problem often persists because the AI's neural network—its 'brain'—remains susceptible to being tricked in novel ways.

The Brex CEO revealed a novel safety architecture called "crab trap." Instead of human oversight, it uses a second, adversarial LLM to monitor the primary agent. This second LLM acts as a proxy, intercepting and blocking harmful or out-of-scope actions at the network layer before they can execute.

Current AI safety solutions primarily act as external filters, analyzing prompts and responses. This "black box" approach is ineffective against jailbreaks and adversarial attacks that manipulate the model's internal workings to generate malicious output from seemingly benign inputs, much like a building's gate security can't stop a resident from causing harm inside.

The non-agentic 'Scientist AI' predictor can be made into an agent by adding 'scaffolding' that asks it questions about the likely outcomes of potential actions. This method creates capable agents while retaining the core model's honesty and safety properties, avoiding the pitfalls of standard reinforcement learning.