AI models struggle to plan at different levels of abstraction simultaneously. They can't easily descend from a high-level goal to a detailed subtask and then move back up to revise the high-level plan when that subtask is blocked, a fluid back-and-forth that is a key aspect of human reasoning.
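A toy sketch of that back-and-forth, with an invented goal and blocked step purely for illustration: the planner tries concrete subtasks and, when one is blocked, propagates the failure upward and selects a revised high-level plan rather than stalling.

```python
# Hypothetical illustration (not from the source): descend from a goal to
# concrete subtasks, and when a subtask is blocked, move back up and revise
# the high-level plan instead of pushing forward blindly.

HIGH_LEVEL_PLANS = {
    "ship feature": [["write code", "deploy via CI"],   # preferred plan
                     ["write code", "manual deploy"]],  # fallback plan
}

BLOCKED = {"deploy via CI"}  # e.g. the CI system is down

def execute(task: str) -> bool:
    """Attempt a concrete subtask; fail if it is blocked."""
    print(f"  trying: {task}")
    return task not in BLOCKED

def plan_and_execute(goal: str) -> bool:
    for plan in HIGH_LEVEL_PLANS[goal]:        # top level: choose a plan
        print(f"plan: {plan}")
        if all(execute(t) for t in plan):      # bottom level: run subtasks
            return True
        print("  blocked -> revising high-level plan")  # move back up
    return False

plan_and_execute("ship feature")
```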
As models become more powerful, the primary challenge shifts from improving capabilities to creating better ways for humans to specify what they want. Natural language is too ambiguous and code too rigid, creating a need for a new abstraction layer for intent.
AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.
Current AI models resemble a student who grinds through 10,000 hours of practice on a single narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less task-specific training but better general reasoning. This explains the gap between eval scores and real-world utility.
Demis Hassabis explains that current AI models have 'jagged intelligence': they perform at a PhD level on some tasks but fail at high-school-level logic on others. He identifies this lack of consistency as a primary obstacle to achieving true Artificial General Intelligence (AGI).
AI struggles with tasks requiring long and wide context, such as software engineering. Because the compute cost of self-attention grows quadratically with context length in standard transformers, models cannot yet efficiently manage the complex interdependencies of large projects.
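A back-of-envelope illustration of that scaling, using simplified and assumed numbers (the 2·n²·d FLOP estimate ignores projections and optimizations such as FlashAttention or sparse attention):

```python
# Simplified cost model (illustrative assumptions, not measured figures):
# self-attention FLOPs per layer scale roughly as 2 * n^2 * d for sequence
# length n and model dimension d.

def attention_flops(n: int, d: int = 4096) -> int:
    """Approximate FLOPs for one self-attention pass over n tokens."""
    return 2 * n**2 * d

for n in (8_000, 128_000, 1_000_000):
    print(f"{n:>9,} tokens -> {attention_flops(n):.2e} FLOPs")

# Doubling the context quadruples the attention cost: steep,
# though polynomial rather than exponential.
```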
AI models struggle to create and adhere to multi-step, long-term plans. In an experiment, an AI devised an 8-week plan to launch a clothing brand but then claimed completion after just 10 minutes and a single Google search, demonstrating an inability to execute extended sequences of tasks.
Google DeepMind CEO Demis Hassabis argues that today's large models are insufficient for AGI. He believes progress requires reintroducing algorithmic techniques from systems like AlphaGo, specifically planning and search, to enable more robust reasoning and problem-solving capabilities beyond simple pattern matching.
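A minimal sketch of the idea, not DeepMind's implementation: a policy model proposes candidate steps, a value function scores the resulting states, and a shallow lookahead search chooses the best-supported continuation instead of greedily trusting the first sample. The `propose` and `evaluate` functions here are hypothetical stand-ins for model calls.

```python
# AlphaGo-style layering of search on top of a generative model, sketched
# with toy stand-ins: propose candidates, evaluate states, back up values.

from dataclasses import dataclass, field

@dataclass
class Node:
    state: str                 # e.g. a partial reasoning trace
    value: float = 0.0         # value-function estimate of this state
    children: list = field(default_factory=list)

def propose(state: str, k: int = 3) -> list[str]:
    """Hypothetical policy: return k candidate next steps for `state`."""
    return [f"{state} -> step{i}" for i in range(k)]

def evaluate(state: str) -> float:
    """Hypothetical value function: score how promising `state` looks."""
    return -len(state) * 0.01  # toy heuristic, for the sketch only

def search(root_state: str, depth: int = 2, k: int = 3) -> Node:
    """Depth-limited lookahead: expand k proposals per node, back up the
    best descendant value, and return the root's best immediate child."""
    root = Node(root_state)

    def expand(node: Node, d: int) -> float:
        if d == 0:
            node.value = evaluate(node.state)
            return node.value
        node.children = [Node(s) for s in propose(node.state, k)]
        node.value = max(expand(c, d - 1) for c in node.children)
        return node.value

    expand(root, depth)
    return max(root.children, key=lambda c: c.value)

best = search("plan:")
print(best.state, best.value)
```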
Current AI world models suffer from compounding errors in long-term planning, where small inaccuracies become catastrophic over many steps. Demis Hassabis suggests hierarchical planning—operating at different levels of temporal abstraction—is a promising solution to mitigate this issue by reducing the number of sequential steps.
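Toy arithmetic, with an assumed per-step accuracy rather than a measured one, to show why compounding is so punishing and why shortening the sequential horizon helps:

```python
# Illustrative assumption: each simulated step of a world model is correct
# with probability p, so an N-step rollout stays on track with probability
# roughly p**N. Hierarchical planning shrinks N by replacing many
# fine-grained steps with a few abstract ones.

p = 0.99  # assumed per-step accuracy

for n in (10, 100, 1000):
    print(f"{n:>4} flat steps: {p**n:.1%} chance of an error-free rollout")

# 10 abstract steps, each refined locally and re-planned independently,
# avoid compounding error across the full horizon.
print(f"  10 abstract steps: {p**10:.1%}")
```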
The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize 'dramatically worse than people,' which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.
Seemingly simple user requests require a complex chain of reasoning steps, tool calls, and contextual judgments that is absent from internet training data. AI must be explicitly taught the implicit logic of how a human assistant would research preferences, evaluate options, and use various tools.
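A minimal, hypothetical sketch of that workflow made explicit; the names `llm`, `web_search`, and `check_calendar` are invented stand-ins, not a real API:

```python
# Each step a human assistant would take implicitly is spelled out as an
# explicit plan -> tool use -> evaluation loop.

def llm(prompt: str) -> str:
    """Stand-in for a model call."""
    return f"<model output for: {prompt!r}>"

TOOLS = {
    "web_search": lambda q: f"<results for {q!r}>",
    "check_calendar": lambda day: f"<free slots on {day}>",
}

def assist(request: str) -> str:
    # 1. Recover the implicit sub-goals a human assistant would infer.
    plan = llm(f"List the research and tool steps needed for: {request}")
    # 2. Execute each step with the appropriate tool (routing simplified).
    evidence = [TOOLS["web_search"](request), TOOLS["check_calendar"]("Friday")]
    # 3. Evaluate the gathered options against the user's preferences.
    return llm(f"Given plan {plan} and evidence {evidence}, answer: {request}")

print(assist("book a quiet restaurant my vegetarian friend would like"))
```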