Full-Stack AI Safety: Why Defense-in-Depth Might Work, with Far.AI CEO Adam Gleave

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Sep 20, 2025

Far.AI CEO Adam Gleave on a cautiously optimistic path to AGI, effective defense-in-depth, and a future where humans live like nobility.

AI Safety's Biggest Threat Is Rushed Implementation, Not Unsolvable Problems

The primary danger in AI safety is not a lack of theoretical solutions but the tendency for developers to implement defenses on a "just-in-time" basis. This leads to cutting corners and implementation errors, analogous to how strong cryptography is often defeated by sloppy code, not broken algorithms.


AI Interpretability Reveals Messy Systems, Not Clean, Reverse-Engineered Algorithms

The ambition to fully reverse-engineer AI models into simple, understandable components is proving unrealistic: their internal workings are messy and complex. Interpretability's practical value therefore lies less in hard guarantees than in coarse-grained analysis, such as flagging when a specific high-level capability is being used.


Effective AI 'Defense in Depth' Requires Uncorrelated, Not Just Layered, Safeguards

Most AI "defense in depth" systems fail because their layers are correlated, often using the same base model. A successful approach requires creating genuinely independent defensive components. Even if each layer is individually weak, their independence makes it combinatorially harder for an attacker to bypass them all.
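The combinatorial claim can be made concrete with a toy probability sketch. The numbers and function names below are hypothetical illustrations, not figures from the episode: they contrast independent layers, whose bypass probabilities multiply, with fully correlated layers, which share blind spots and are only as strong as a single layer.

```python
# Toy model of layered safeguards (illustrative numbers, not from the episode).

def bypass_prob_independent(layer_bypass_probs):
    """Independent layers: an attack must defeat each layer separately,
    so the per-layer bypass probabilities multiply."""
    p = 1.0
    for q in layer_bypass_probs:
        p *= q
    return p

def bypass_prob_fully_correlated(layer_bypass_probs):
    """Fully correlated layers (e.g. all built on the same base model)
    share blind spots: an attack that fools one fools them all, so the
    system is no stronger than its best single layer."""
    return min(layer_bypass_probs)

# Three individually weak filters, each bypassed 10% of the time.
layers = [0.1, 0.1, 0.1]
print(bypass_prob_independent(layers))       # ~0.001: 1-in-1000 attacks get through
print(bypass_prob_fully_correlated(layers))  # 0.1: still 1-in-10
```

The gap widens with each added layer only in the independent case, which is why the episode stresses uncorrelated safeguards rather than simply more of them.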


AI Progress Follows Three Tiers: Powerful Tools, Autonomous Agents, and Full Organizations

Instead of a single "AGI" event, AI progress is better understood in three stages. We're in the "powerful tools" era. The next is "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off due to capability "spikiness."


Training AIs Against 'Lie Detectors' Can Reduce Deception But Risks Creating Better Liars

Scalable oversight using ML models as "lie detectors" can train AI systems to be more honest. However, this is a double-edged sword. Certain training regimes can inadvertently teach the model to become a more sophisticated liar, successfully fooling the detector and hiding its deceptive behavior.


AI's Poor Sample Efficiency Is a Fundamental Weakness Compared to Human Learning

Even with vast training data, current AI models are far less sample-efficient than humans. This limits their ability to adapt and learn new skills on the fly. They resemble a perpetual new hire who can access information but lacks the deep, instinctual learning that comes from experience and weight updates.


AI's 'Spiky' Skill Profile Ensures Human-Led Organizations Remain Competitive for Decades

All-AI organizations will struggle to replace human ones until AI masters a wide range of skills. Humans will retain a critical edge in areas like long-horizon strategy and metacognition, allowing human-AI teams to outperform purely AI systems, potentially until around 2040.


Far.AI CEO Envisions Post-AGI Humans as Powerless but Prosperous 'Nobility's Children'

This analogy frames a realistic, cautiously optimistic post-AGI world. Humans may lose their central role in driving progress but will enjoy immense wealth and high living standards, finding meaning outside of economic production, similar to younger children of European nobility who didn't inherit titles.


Far.AI's Vertically Integrated Model Aims to Fix AI Safety's 'Dropped Baton' Problem

Unlike specialized non-profits, Far.AI covers the entire AI safety value chain from research to policy. This structure is designed to prevent promising safety ideas from being "dropped" between the research and deployment phases, a common failure point where specialized organizations struggle to hand off work.
