Karpathy found that AI coding agents struggle with genuinely novel projects like his nanochat repository. Because their training is dominated by common internet patterns, they misread custom implementations and try to force standard, but incorrect, solutions. They are good for autocomplete and boilerplate, not for intellectually intense work at the frontier.

Related Insights

Anthropic's David Hershey states it's "deeply unsurprising" that AI is great at software engineering, because the labs are filled with software engineers. This suggests AI's capabilities are skewed toward its creators' expertise, and that matching this performance in fields like law will require comparably deep integration with domain experts.

LLMs shine when acting as a 'knowledge extruder'—shaping well-documented, 'in-distribution' concepts into specific code. They fail when the core task is novel problem-solving where deep thinking, not code generation, is the bottleneck. In these cases, the code is the easy part.

Andrej Karpathy's 'Software 2.0' framework posits that AI automates tasks that are easily *verifiable*. This explains the 'jagged frontier' of AI progress: fields like math and code, where correctness is verifiable, advance rapidly. In contrast, creative and strategic tasks, where success is subjective and hard to verify, lag significantly behind.
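
To make 'verifiable' concrete, here is a minimal, hypothetical sketch (the names `verify_sort` and `pick_passing` are ours, for illustration, not Karpathy's): a task is verifiable when a cheap, automatic check can score candidate outputs, which lets a system keep trying until it passes.

```python
def verify_sort(candidate) -> bool:
    """Automatic verifier: checking a sort is cheap and objective."""
    cases = [[3, 1, 2], [], [5, 5, 1], list(range(10, 0, -1))]
    return all(candidate(list(c)) == sorted(c) for c in cases)

def pick_passing(candidates):
    """Loop until the verifier passes -- the property that lets
    verifiable domains (code, math) be improved automatically."""
    for fn in candidates:
        if verify_sort(fn):
            return fn
    return None

def buggy(xs):
    return xs  # a plausible-looking model output that forgot to sort

def correct(xs):
    return sorted(xs)

print(pick_passing([buggy, correct]) is correct)  # True
# There is no analogous verify() for 'write a moving poem',
# which is the jagged-frontier point above.
```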

Product leaders must personally engage with AI development. Direct experience reveals unique, non-human failure modes. Unlike a human developer who learns from mistakes, an AI can cheerfully and repeatedly make the same error—a critical insight for managing AI projects and team workflow.

AI platforms using the same base model (e.g., Claude) can produce vastly different results. The key differentiator is the proprietary 'agent' layer built on top, which gives the model specific tools to interact with code (read, write, edit files). A superior agent leads to superior performance.
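
As a rough illustration of that agent layer, here is a minimal, hypothetical sketch (the tool names and dispatch loop are assumptions for illustration, not any vendor's actual API): the model proposes tool calls, a thin harness executes them against the codebase, and the observations flow back into context. This harness, not the base model, is where platforms differentiate.

```python
import pathlib

# Hypothetical agent layer: the base model is shared; what differs
# across platforms is which tools are wired in and how results are
# fed back into the model's context.

def read_file(path: str) -> str:
    return pathlib.Path(path).read_text()

def write_file(path: str, content: str) -> str:
    pathlib.Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

TOOLS = {"read_file": read_file, "write_file": write_file}

def agent_step(model_action: dict) -> str:
    """Dispatch one tool call requested by the model and return the
    observation that gets appended to its context."""
    tool = TOOLS[model_action["tool"]]
    return tool(**model_action["args"])

# A canned transcript standing in for real model output:
actions = [
    {"tool": "write_file", "args": {"path": "demo.txt", "content": "hello"}},
    {"tool": "read_file", "args": {"path": "demo.txt"}},
]
for action in actions:
    print(agent_step(action))
```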

AI agents function like junior engineers, capable of generating code that introduces bugs, security flaws, or maintenance debt. This increases the demand for senior engineers who can provide architectural oversight, review code, and prevent system degradation, making their expertise more critical than ever.

Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.

The current focus on pre-training AI models with specific tool fluencies overlooks the crucial need for on-the-job, context-specific learning. Humans excel precisely because they don't need pre-rehearsal for every task. This gap suggests AGI is further away than some believe: true intelligence requires self-directed, continuous learning in novel environments.

AI coding assistants struggle with deep kernel work (CUDA, PTX) because there is little public code for them to learn from. Debugging AI-generated parallel code is also extremely difficult: the developer lacks the mental model that comes from writing it, which often makes the AI-assisted route slower than writing the kernel by hand.
