Dwarkesh and Ilya Sutskever on What Comes After Scaling

The a16z Show · Dec 15, 2025

The era of scaling AI is over. Ilya Sutskever argues the field is back in the age of research, where solving generalization is the key to bridging the gap to AGI.

Human Emotions Act as a Robust, Evolution-Coded Value Function for Decision Making

Emotions are not superfluous; they are a critical, hardcoded value function shaped by evolution. The case of a patient who lost emotional capacity, and with it the ability to make decisions, illustrates the point. Our 'gut feelings' appear to be a robust system for guiding action, a mechanism current AI lacks.
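
To make the 'value function' framing concrete, here is a minimal Python sketch, not anything from the episode: an agent chooses among actions by scoring predicted outcomes with a fixed, evolution-supplied scoring function. The names (Outcome, hardwired_value, choose_action) and the weights are illustrative assumptions.

```python
# Minimal sketch (not from the episode): emotions modeled as a fixed,
# evolution-supplied value function that scores predicted outcomes, so an
# agent can choose among actions even without a task-specific reward signal.
# Outcome fields, weights, and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Outcome:
    safety: float            # e.g. fear tracks physical risk
    social_standing: float   # e.g. shame/pride track social approval
    energy: float            # e.g. hunger/fatigue track bodily resources

def hardwired_value(o: Outcome) -> float:
    """A fixed 'gut feeling' score; the weights are set by evolution, not learned."""
    return 2.0 * o.safety + 1.5 * o.social_standing + 1.0 * o.energy

def choose_action(candidates: dict[str, Outcome]) -> str:
    """Pick the action whose predicted outcome 'feels' best."""
    return max(candidates, key=lambda a: hardwired_value(candidates[a]))

if __name__ == "__main__":
    options = {
        "take_risky_shortcut": Outcome(safety=-1.0, social_standing=0.5, energy=0.5),
        "take_safe_route":     Outcome(safety=1.0, social_standing=0.0, energy=-0.2),
    }
    print(choose_action(options))  # -> take_safe_route
```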

The Entire Problem of AGI Safety Boils Down to Managing Its Inevitable Power

The fundamental challenge of creating safe AGI is not about specific failure modes but about grappling with the immense power such a system will wield. The difficulty in truly imagining and 'feeling' this future power is a major obstacle for researchers and the public, hindering proactive safety measures. The core problem is simply 'the power.'

Poor Generalization is the Fundamental Flaw Holding Back Current AI Models

The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize 'dramatically worse than people,' which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.

True AGI Will Be a Fast Continual Learner, Not an Omniscient, Pre-Trained Oracle

The popular concept of AGI as a static, all-knowing entity is flawed. A more realistic and powerful model is one analogous to a 'super intelligent 15-year-old'—a system with a foundational capacity for rapid, continual learning. Deployment would involve this AI learning on the job, not arriving with complete knowledge.
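
A rough sketch of what 'learning on the job' could look like in code, under my own simplifying assumptions rather than anything specified in the episode: the agent ships with a weak prior competence and updates it online from feedback on each task it performs. The class, method names, and update rule are all hypothetical.

```python
# Minimal sketch (my assumptions, not a recipe from the episode) of
# "learning on the job": the agent starts with a weak prior competence on
# any task and improves it online from feedback received while working,
# instead of arriving with complete knowledge. Names and the update rule
# are hypothetical.

class ContinualLearner:
    def __init__(self, learning_rate: float = 0.1, prior_competence: float = 0.2):
        self.skill: dict[str, float] = {}   # task -> estimated competence in [0, 1]
        self.lr = learning_rate
        self.prior = prior_competence       # capable novice, fast learner

    def competence(self, task: str) -> float:
        return self.skill.get(task, self.prior)

    def learn_from_feedback(self, task: str, feedback_quality: float) -> None:
        """Online update after each attempt; no offline retraining pass."""
        current = self.competence(task)
        # good feedback closes part of the remaining gap to full competence
        self.skill[task] = current + self.lr * feedback_quality * (1.0 - current)

if __name__ == "__main__":
    agent = ContinualLearner()
    for _ in range(90):   # roughly three months of daily on-the-job feedback
        agent.learn_from_feedback("triage_support_ticket", feedback_quality=0.8)
    # competence has climbed from the 0.2 prior toward 1.0
    print(round(agent.competence("triage_support_ticket"), 3))
```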

AI Models Are Over-trained 'Competitive Programmers' Who Lack Real-World Judgment

AI models excel at specific tasks (like evals) because they are trained exhaustively on narrow datasets, akin to a student practicing 10,000 hours for a coding competition. While they become experts in that domain, they fail to develop the broader judgment and generalization skills needed for real-world success.

AI Models Excel on Benchmarks But Fail in Reality Due to 'Teaching to the Test'

AI models show impressive performance on evaluation benchmarks but underwhelm in real-world applications. This gap exists because researchers, focused on evals, create reinforcement learning (RL) environments that mirror test tasks. This leads to narrow intelligence that doesn't generalize, a form of human-driven reward hacking.
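
A toy harness, assumed for illustration only, shows why 'teaching to the test' produces this gap: an agent tuned to the benchmark's task patterns scores perfectly on the benchmark while failing on held-out real-world tasks. The functions and task lists (score, eval_gap, benchmark_tasks, real_world_tasks) are hypothetical.

```python
# Toy harness (illustration only, not from the episode) showing why
# "teaching to the test" breaks benchmark scores as a proxy for real-world
# performance. score(), eval_gap(), and the task lists are hypothetical.

from typing import Callable, Sequence

Task = str
Agent = Callable[[Task], bool]   # returns True if the agent solves the task

def score(agent: Agent, tasks: Sequence[Task]) -> float:
    return sum(agent(t) for t in tasks) / len(tasks)

def eval_gap(agent: Agent, benchmark: Sequence[Task], real_world: Sequence[Task]) -> float:
    """Benchmark score minus real-world score; a large gap suggests the
    training environments were (implicitly) fit to the benchmark."""
    return score(agent, benchmark) - score(agent, real_world)

if __name__ == "__main__":
    benchmark_tasks = [f"bench_{i}" for i in range(50)]
    real_world_tasks = [f"messy_{i}" for i in range(50)]
    # An agent that has effectively memorized the benchmark's task patterns.
    overfit_agent: Agent = lambda task: task.startswith("bench_")
    print(eval_gap(overfit_agent, benchmark_tasks, real_world_tasks))  # 1.0
```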

AI's 'Age of Scaling' Is Over; We're Back to the 'Age of Research'

The era of guaranteed progress by simply scaling up compute and data for pre-training is ending. With massive compute now available, the bottleneck is no longer resources but fundamental ideas. The AI field is re-entering a period where novel research, not just scaling existing recipes, will drive the next breakthroughs.

Evolution Mysteriously Hardcoded High-Level Social Desires, Not Just Primal Instincts

It's a profound mystery how evolution encoded high-level desires like seeking social approval. Unlike simple instincts linked to sensory input (e.g., smell), these social goals require complex brain processing to even define. The mechanism by which our genome instills a preference for such abstract concepts is unknown and represents a major gap in our understanding.
