Hassabis argues AGI isn't just about solving existing problems. True AGI must demonstrate the capacity for breakthrough creativity, like Einstein developing a new theory of physics or Picasso creating a new art genre. This sets a bar far higher than any current system clears.
A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across 10 domains. This new framework moves beyond abstract debate, showing a concrete 30-point leap in AGI score from GPT-4 (27%) to a projected GPT-5 (57%).
The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a 'super-intelligent 15-year-old' that learns and adapts to specific roles.
Demis Hassabis argues against an LLM-only path to AGI, citing DeepMind's successes like AlphaGo and AlphaFold as evidence. He advocates for "hybrid systems" (or neurosymbolics) that combine neural networks with other techniques like search or evolutionary methods to discover truly new knowledge, not just remix existing data.
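As one illustration of the non-neural half of such a hybrid, here is a minimal (1+1) evolutionary search. The fitness function, mutation operator, and target phrase are toy stand-ins chosen for this sketch, not anything from DeepMind's systems.

```python
import random

def evolve(fitness, mutate, seed, generations=3000, rng=None):
    """(1+1) evolutionary search: keep one candidate and accept a mutation
    only if it scores at least as well, so fitness never decreases."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    best = seed
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child) >= fitness(best):
            best = child
    return best

# Toy task: rediscover a phrase starting from blank characters. No
# "training data" contains the answer; the search itself produces it.
TARGET = "new knowledge"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

result = evolve(fitness, mutate, " " * len(TARGET))
```

In real hybrid systems the mutated objects are far richer (programs, network architectures, candidate solutions scored by a neural network), but the accept-if-better loop is the same basic mechanism for discovering things rather than retrieving them.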
Demis Hassabis explains that current AI models have "jagged intelligence": they perform at a PhD level on some tasks but fail at high-school-level logic on others. He identifies this inconsistency as a primary obstacle to achieving true Artificial General Intelligence (AGI).
Moving away from abstract definitions, Sequoia Capital's Pat Grady and Sonya Huang propose a functional definition of AGI: the ability to figure things out. This involves combining baseline knowledge (pre-training) with reasoning and the capacity to iterate over long horizons to solve a problem without a predefined script, as seen in emerging coding agents.
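A "figure things out" loop of that kind can be sketched in a few lines. Here `propose` and `evaluate` are hypothetical stand-ins for a model call and a test harness, not any particular agent framework's API.

```python
def solve(task, propose, evaluate, max_iters=10):
    """Minimal 'figure it out' loop: draft a solution, test it, feed the
    failure back in, and try again over a long horizon.

    propose(task, feedback) -> candidate solution
    evaluate(candidate)     -> (passed, feedback)
    """
    feedback = None
    for _ in range(max_iters):
        candidate = propose(task, feedback)
        passed, feedback = evaluate(candidate)
        if passed:
            return candidate
    return None  # budget exhausted without a verified solution

# Toy stand-in: the "model" proposes increasing guesses until the check passes.
guesses = iter(range(10))
answer = solve(
    "guess the number",
    propose=lambda task, feedback: next(guesses),
    evaluate=lambda g: (g == 7, f"{g} is wrong"),
)
print(answer)  # prints 7
```

Coding agents follow the same shape: the proposal step is an LLM writing code, and the evaluation step is running the tests, with the error output fed back as context for the next attempt.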
Demis Hassabis, CEO of Google DeepMind, warns that the societal transition to AGI will be immensely disruptive, happening at a scale and speed ten times greater than the Industrial Revolution. This suggests that historical parallels are inadequate for planning and preparation.
Google DeepMind CEO Demis Hassabis argues that today's large models are insufficient for AGI. He believes progress requires reintroducing algorithmic techniques from systems like AlphaGo, specifically planning and search, to enable more robust reasoning and problem-solving capabilities beyond simple pattern matching.
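The idea of layering search on top of a learned predictor can be sketched as depth-limited lookahead. The number-line domain and the hand-written value function below are toy assumptions, not AlphaGo's actual machinery (which uses Monte Carlo tree search with learned policy and value networks).

```python
def lookahead(state, actions, value, depth):
    """Depth-limited search: score each action by the best state value
    reachable within `depth` steps, rather than trusting one forward pass."""
    if depth == 0:
        return value(state), None
    best = (float("-inf"), None)
    for name, step in actions:
        score, _ = lookahead(step(state), actions, value, depth - 1)
        if score > best[0]:
            best = (score, name)
    return best

# Toy domain: move an integer toward a target of 5.
ACTIONS = [("inc", lambda s: s + 1), ("dec", lambda s: s - 1), ("dbl", lambda s: s * 2)]
score, move = lookahead(0, ACTIONS, lambda s: -abs(5 - s), depth=3)
print(move)  # prints "inc" (best three-step line: 0 -> 1 -> 2 -> 4, value -1)
```

The point of the wrapper is that the system's answer comes from exploring consequences, not from a single pattern-matched guess; in AlphaGo-style systems the hand-written `value` is replaced by a learned network, and the branching is pruned by a learned policy.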
Shane Legg proposes that "Minimal AGI" is achieved when an AI can perform the cognitive tasks a typical person can. It's not about matching Einstein, but about no longer failing at tasks we'd expect an average human to complete. This sets a more concrete and achievable initial benchmark for the field.
Demis Hassabis argues that current LLMs are limited by their "goldfish brain"—they can't permanently learn from new interactions. He identifies solving this "continual learning" problem, where the model itself evolves over time, as one of the critical innovations needed to move from current systems to true AGI.
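The distinction can be made concrete with a toy contrast between a frozen model and one that takes an online gradient step after every interaction. This illustrates the problem statement, not any proposed solution; the linear model and learning rate are assumptions of the sketch.

```python
class FrozenModel:
    """Today's deployment pattern: weights fixed at training time; new
    interactions live only in a temporary context window."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x

    def observe(self, x, y):
        pass  # feedback is discarded once the session ends

class ContinualModel(FrozenModel):
    """The 'continual learning' alternative: each interaction nudges the
    weights, so the model itself evolves over time."""
    def observe(self, x, y, lr=0.1):
        error = self.predict(x) - y
        self.weight -= lr * error * x  # one online gradient step

frozen, continual = FrozenModel(0.0), ContinualModel(0.0)
for x, y in [(1.0, 2.0)] * 50:  # the true mapping is y = 2x
    frozen.observe(x, y)
    continual.observe(x, y)
print(frozen.predict(1.0), continual.predict(1.0))  # frozen stays at 0.0; continual is near 2.0
```

For LLMs the hard part is doing the equivalent of that update safely at scale, without catastrophic forgetting of everything learned in pre-training, which is why Hassabis flags it as an open research problem rather than an engineering task.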
Demis Hassabis sees video generation as more than a content tool; it's a step toward building AI with "world models." By learning to generate realistic scenes, these models develop an intuitive understanding of physics and causality, a foundational capability for AGI to perform long-term planning in the real world.