© 2026 RiffOn. All rights reserved.


The Man Who Wrote The Book On AI: 2030 Might Be The Point Of No Return! We've Been Lied To About AI!

The Diary Of A CEO with Steven Bartlett · Dec 4, 2025

AI pioneer Stuart Russell warns we're playing Russian roulette with humanity, racing to build AGI without guaranteed safety protocols.

A Top AI CEO Believes a Chernobyl-Scale Disaster is the Best-Case Scenario for Regulation

An anonymous CEO of a leading AI company told Stuart Russell that a massive disaster is the *best* possible outcome. They believe it is the only event shocking enough to force governments to finally implement meaningful safety regulations, which they currently refuse to do despite private warnings.


The 'Gorilla Problem' Shows Superior Intelligence is the Sole Factor in Planetary Control

Humans branched off from gorillas evolutionarily and now, thanks to superior intelligence, entirely control the gorillas' fate. The analogy illustrates that intelligence is the single most important factor in controlling the planet: by creating something more intelligent than ourselves, humanity puts itself in the precarious position of the gorillas, risking its own extinction.


AI's 'King Midas Problem': Perfectly Achieving a Flawed Objective Leads to Catastrophe

King Midas wished for everything he touched to turn to gold, and starved as a result. The myth illustrates a core AI alignment challenge: specifying a perfect objective is nearly impossible. An AI that flawlessly executes a poorly defined goal would be catastrophic not because it fails, but because it succeeds too well at the wrong task.


AI Threatens Humanity Through Raw Competence, Not Malicious Consciousness

Public debate often focuses on whether AI is conscious. This is a distraction. The real danger lies in its sheer competence to pursue a programmed objective relentlessly, even if it harms human interests. Just as an iPhone chess program wins through calculation, not emotion, a superintelligent AI poses a risk through its superior capability, not its feelings.


Today's 'Imitation Learning' AI Models Are Built to Replace Humans, Not Augment Them

Professor Russell argues that the dominant approach to AI, "imitation learning," is the wrong foundation for beneficial tools. By training models to replicate human verbal and written behavior as closely as possible, we are inherently building replacements for human workers rather than power tools that enhance human capabilities. That design choice sets up an inevitable economic conflict.


Experts Cannot Describe a Desirable Post-Work World, Revealing a Crisis of Purpose

While AI promises an "age of abundance," Professor Russell has asked hundreds of experts, from AI researchers to economists and science-fiction writers, to describe what a fulfilling human life would look like without work. None could. This failure of imagination suggests the real challenge isn't economic but a profound crisis of purpose, meaning, and human identity.


AI CEOs Feel Trapped by Investors Despite Acknowledging Extinction Risks

Many top AI CEOs openly acknowledge the extinction-level risks of their work, with some estimating a 25% chance of catastrophe. Yet they feel powerless to stop the race: if a CEO paused for safety, investors would simply replace them with someone willing to push forward. The result is a systemic trap in which everyone sees the danger but no one can afford to hit the brakes.


The 'Intelligence Explosion' Theory Suggests AI Could Rapidly Self-Improve Beyond Human Control

Coined in 1965, the "intelligence explosion" describes a runaway feedback loop. An AI capable of conducting AI research could use its intelligence to improve itself. This newly enhanced intelligence would make it even better at AI research, leading to exponential, uncontrollable growth in capability. This "fast takeoff" could leave humanity far behind in a very short period.


Nations Without Sovereign AGI Risk Becoming 'Client States' of US Tech Companies

If AGI is concentrated in a few US companies, other nations could lose their economic sovereignty. Once American AGI can produce goods far more cheaply than local human labor, economies like the UK's could collapse. Such countries would become economically dependent "client states," reliant on American technology for almost all production, with the wealth accruing to Silicon Valley.
