The Arrival of AGI with Shane Legg (co-founder of DeepMind)

Google DeepMind: The Podcast · Dec 11, 2025

DeepMind's Shane Legg explains why he believes AGI is near (a 50% chance by 2028) and why society must urgently prepare for massive transformation and the arrival of superintelligence.

Today's AI Models Are Simultaneously Superhuman and Subhuman

AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.

Superintelligence Is Inevitable Due to Silicon's Physical Advantages Over Brains

DeepMind's Shane Legg argues that human intelligence is not the upper limit, because the brain is constrained by biology: a roughly 20-watt power budget and slow electrochemical signalling. Data centers hold advantages of orders of magnitude in power, bandwidth, and signal speed, which in his view makes superhuman AI a physical certainty.

AI Progress Requires Algorithmic Shifts, Not Just More Data and Scale

Solving key AI weaknesses like continual learning or robust reasoning isn't just a matter of bigger models or more data. Shane Legg argues it requires fundamental algorithmic and architectural changes, such as building new processes for integrating information over time, akin to an episodic memory.

Google DeepMind Co-founder Defines AGI as Matching Typical, Not Peak, Human Cognition

Shane Legg proposes "Minimal AGI" is achieved when an AI can perform the cognitive tasks a typical person can. It's not about matching Einstein, but about no longer failing at tasks we'd expect an average human to complete. This sets a more concrete and achievable initial benchmark for the field.

Elite "Laptop Jobs" Are More Vulnerable to AI Than Physical Trades Like Plumbing

Contrary to popular belief, highly compensated cognitive work (lawyers, software engineers, financiers) is the most exposed to AI disruption. If a job can be done remotely with just a laptop, an advanced AI can likely operate in that same space. Physical jobs requiring robotics will be protected for longer due to cost and complexity.

AI Safety Should Model Kahneman's "System 2" to Reason Ethically, Not Just React

Instead of relying on instinctual "System 1" rules, advanced AI should use deliberative "System 2" reasoning. By analyzing consequences and applying ethical frameworks, with this reasoning made inspectable through chain-of-thought monitoring, AIs could potentially become more consistently ethical than humans, who are prone to gut reactions.

The General Public Grasps AI's Transformative Power Better Than Domain Experts

Shane Legg observes that non-technical people often recognize AI's general intelligence because it already surpasses them in many areas. In contrast, domain experts tend to believe their field is too unique to be affected, underestimating the technology's rapid, exponential progress and anchoring on outdated experience.

An AGI Should Be Certified Through Adversarial "Red Teaming," Not Just Standardized Tests

Shane Legg suggests a two-phase test for "Minimal AGI." First, it must pass a broad suite of tasks that typical humans can do. Second, an adversarial team gets months to probe the AI, looking for any cognitive task a typical person can do that the AI cannot. If they fail to find one, the AI passes.
