Andrej Karpathy — AGI is still a decade away

Dwarkesh Podcast · Oct 17, 2025

Andrej Karpathy on why AI agents are a decade-long project, the limits of current models, the flaws in reinforcement learning, and his vision for AI education.

LLMs' Superhuman Memorization is a Bug, Not a Feature

Unlike humans, whose poor memory forces them to generalize and find patterns, LLMs are incredibly good at memorization. Karpathy argues this is a flaw: recall of specific training documents distracts the models from the underlying, generalizable algorithms of thought and hinders true understanding.

AI Researcher Andrej Karpathy Predicts a "Decade of Agents," Not a Single "Year"

Karpathy argues against the hype of an imminent "year of agents." He believes that while impressive, current AI agents have significant cognitive deficits. Achieving the reliability of a human intern will require a decade of sustained research to solve fundamental problems like continual learning and multimodality.

Reinforcement Learning Inefficiently "Sucks Supervision Through a Straw"

Karpathy criticizes standard reinforcement learning as a noisy and inefficient process. It assigns credit or blame to an entire sequence of actions based on a single outcome bit (success/failure). This is like "sucking supervision through a straw," as it fails to identify which specific steps in a successful trajectory were actually correct.
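
The toy sketch below (an illustration constructed for this summary, not code from the episode) makes the complaint concrete: a single end-of-trajectory reward is broadcast to every step, so a REINFORCE-style update cannot tell which steps actually earned the success.

```python
# Toy illustration (not from the episode) of outcome-only credit assignment:
# one scalar reward for the whole trajectory is broadcast to every step.
import numpy as np

rng = np.random.default_rng(0)

def rollout(num_steps=10):
    """Pretend trajectory: each step is either 'correct' (1) or 'incorrect' (0)."""
    steps = rng.integers(0, 2, size=num_steps)
    reward = 1.0 if steps.sum() >= 7 else 0.0   # the single outcome bit
    return steps, reward

steps, reward = rollout()

# REINFORCE-style learning signal: the same scalar for every step, so a wrong
# step inside a winning trajectory gets rewarded, and a correct step inside a
# losing trajectory gets punished.
per_step_signal = np.full(len(steps), reward)

print("steps:           ", steps)
print("per-step signal: ", per_step_signal)
```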

Today's AIs Are "Ghosts" Trained on Internet Data, Not "Animals" Shaped by Evolution

Karpathy cautions against direct analogies between AI and animal intelligence. Animals are products of evolution, an optimization process that bakes in hardware and instinct. In contrast, AIs are "ghosts" trained by imitating human-generated data online, resulting in a fundamentally different, disembodied kind of intelligence.

AI Coding Agents Excel at Boilerplate But Fail on Intellectually Novel Code

Karpathy found AI coding agents struggle with genuinely novel projects like his NanoChat repository. Their training on common internet patterns causes them to misunderstand custom implementations and try to force standard, but incorrect, solutions. They are good for autocomplete and boilerplate but not for intellectually intense, frontier work.

In-Context Learning May Be a Form of Internal Gradient Descent

Contrary to the view that in-context learning is a distinct process from training, Karpathy speculates it might be an emergent form of gradient descent happening within the model's layers. He cites papers showing that transformers can learn to perform linear regression in-context, with internal mechanics that mimic an optimization loop.
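
A minimal sketch of the in-context linear regression setup those papers study (an illustration for this summary, not code from the episode or the papers): the explicit gradient-descent loop below is the computation a trained transformer's forward pass is hypothesized to approximate, fitting a regression from the prompt's examples without any weight update.

```python
# Illustrative sketch of in-context linear regression: a "prompt" of (x, y)
# pairs plus a query x. The explicit gradient-descent loop stands in for what
# the transformer's layers are hypothesized to implement internally.
import numpy as np

rng = np.random.default_rng(0)
d, n_context = 4, 32

w_true = rng.normal(size=d)                         # a fresh task per prompt
X = rng.normal(size=(n_context, d))                 # in-context examples
y = X @ w_true + 0.01 * rng.normal(size=n_context)
x_query = rng.normal(size=d)

w = np.zeros(d)
for _ in range(300):                                # explicit GD on squared error
    grad = X.T @ (X @ w - y) / n_context
    w -= 0.05 * grad

print("in-context prediction:", x_query @ w)
print("ground-truth target:  ", x_query @ w_true)
```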

LLM Knowledge is a Crutch; Future Research Must Isolate the "Cognitive Core"

LLMs learn two things from pre-training: factual knowledge and intelligent algorithms (the "cognitive core"). Karpathy argues the vast memorized knowledge is a hindrance, making models rely on memory instead of reasoning. The goal should be to strip away this knowledge to create a pure, problem-solving cognitive entity.

Despite PhD-Level Skills, Current LLMs Are Cognitively Just "Savant Kids"

Karpathy claims that despite their ability to pass advanced exams, LLMs cognitively resemble "savant kids." They possess vast, perfect memory and can produce impressive outputs, but they lack the deeper understanding and cognitive maturity to create their own culture or truly grasp what they are doing. They are not yet adult minds.

AIs Lack "Culture" and "Self-Play," Halting Multi-Agent Progress

Karpathy identifies two missing components for multi-agent AI systems. First, they lack "culture"—the ability to create and share a growing body of knowledge for their own use, like writing books for other AIs. Second, they lack "self-play," the competitive dynamic seen in AlphaGo that drives rapid improvement.

LLMs Lack a "Sleep" Phase to Distill Daily Experiences into Long-Term Memory

Karpathy identifies a key missing piece for continual learning in AI: an equivalent to sleep. Humans seem to use sleep to distill the day's experiences (their "context window") into the compressed weights of the brain. LLMs lack this distillation phase, forcing them to restart from a fixed state in every new session.
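
A toy sketch of what such a distillation phase could look like (purely illustrative; the bigram counts below merely stand in for real model weights): experience accumulates in a context buffer during the "day," and a consolidation step folds it into persistent parameters before the buffer is cleared.

```python
# Purely illustrative "sleep" phase: the day's experiences live in a context
# buffer, and a consolidation step distills them into persistent "weights"
# (here just bigram counts) before the buffer is wiped.
from collections import defaultdict

weights = defaultdict(int)    # stands in for long-term model parameters
context_window = []           # stands in for the session's context

def interact(text):
    """Daytime: new experience only lands in the context window."""
    context_window.append(text)

def sleep():
    """Consolidation: fold the context into the weights, then reset the context."""
    for utterance in context_window:
        tokens = utterance.split()
        for a, b in zip(tokens, tokens[1:]):
            weights[(a, b)] += 1
    context_window.clear()

interact("the cat sat on the mat")
interact("the cat chased the mouse")
sleep()

print(len(context_window))        # 0 -- the session state is gone...
print(weights[("the", "cat")])    # 2 -- ...but the experience now lives in the weights
```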

LLM Judges for AI Training Are Easily Gamed by Adversarial Examples

Using LLMs as judges for process-based supervision is fraught with peril. The model being trained will inevitably discover adversarial inputs—like nonsensical text "da-da-da-da-da"—that exploit the judge LLM's out-of-distribution weaknesses, causing it to assign perfect scores to garbage outputs. This makes the training process unstable.
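
A toy scorer makes the failure mode concrete (illustrative only; real judges are LLMs, not hand-written heuristics): a judge calibrated on ordinary answers can be maxed out by degenerate inputs it never saw, which is exactly what a policy optimizing against it will find.

```python
# Illustrative stand-in for an LLM judge: behaves sensibly on ordinary answers
# but, like any learned scorer, has unguarded regions off-distribution.
def judge_score(answer: str) -> float:
    """Toy judge: rewards length and 'reasoning-looking' connective words."""
    tokens = answer.split()
    length_bonus = min(len(tokens), 200) / 200
    reasoning_bonus = 0.2 * sum(t in {"therefore", "because", "thus"} for t in tokens)
    return length_bonus + reasoning_bonus

honest = "The derivative is 2x because the power rule lowers the exponent by one."
adversarial = "da " * 150 + "therefore because thus " * 30   # pure nonsense

print(f"honest answer score:      {judge_score(honest):.2f}")
print(f"adversarial answer score: {judge_score(adversarial):.2f}")
```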

AI Models Trained on Their Own Output Suffer from "Model Collapse"

Karpathy warns that training AIs on synthetically generated data is dangerous due to "model collapse." An AI's output, while seemingly reasonable case-by-case, occupies a tiny, low-entropy manifold of the possible solution space. Continual training on this collapsed distribution causes the model to become worse and less diverse over time.
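
A standard toy illustration of the effect (constructed for this summary, not taken from the episode): repeatedly refitting a categorical "model" to samples drawn from the previous generation loses rare outcomes each round, and the distribution's entropy drains away.

```python
# Toy model collapse: refit a categorical "model" to samples drawn from the
# previous generation's model. Rare tokens drop out of the sample, never
# return, and entropy shrinks generation after generation.
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(p):
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

vocab = 50
p = np.full(vocab, 1 / vocab)            # generation 0: uniform, maximum entropy

for gen in range(10):
    synthetic = rng.choice(vocab, size=200, p=p)     # generate from current model
    counts = np.bincount(synthetic, minlength=vocab)
    p = counts / counts.sum()                        # train next model on its own output
    print(f"gen {gen}: entropy = {entropy_bits(p):.2f} bits, "
          f"surviving tokens = {int((p > 0).sum())}")
```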

AI's Early Focus on Game-Playing Reinforcement Learning Was a Foundational Misstep

Karpathy identifies the AI community's 2010s focus on reinforcement learning in games (like Atari) as a misstep. These environments were too sparse and disconnected from real-world knowledge work. Progress required first building powerful representations through large language models, a step that was skipped in early attempts to create agents.

AI's "Demo-to-Product Gap" Mirrors the Decade-Long Slog of Self-Driving Cars

Drawing from his Tesla experience, Karpathy warns of a massive "demo-to-product gap" in AI. Getting a demo to work 90% of the time is easy, but achieving the reliability a real product needs is a "march of nines," where each additional nine of reliability takes roughly as much work as the one before it, which explains the long development timelines.
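
A back-of-the-envelope illustration (the task length and failure rates are assumptions for this summary, not Karpathy's numbers): per-step reliability compounds over a long agent task, so demo-level reliability collapses at product scale and each extra nine buys a real chunk of the gap.

```python
# Assumed numbers, purely for illustration: per-step reliability compounds
# over a hypothetical 100-step agent task.
for per_step_success in (0.90, 0.99, 0.999, 0.9999):
    task_success = per_step_success ** 100
    print(f"per-step {per_step_success:<7}: 100-step task succeeds "
          f"{task_success:.1%} of the time")
```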

AI Progress Won't Create a Sudden GDP Explosion; It Will Sustain the Current Exponential Curve

Karpathy pushes back against the idea of an AI-driven economic singularity. He argues that transformative technologies like computers and the internet were absorbed into the existing GDP exponential curve without creating a visible discontinuity. AI will act similarly, fueling the existing trend of recursive self-improvement rather than breaking it.
