According to Demis Hassabis, LLMs feel uncreative because they only perform pattern matching. To achieve true, extrapolative creativity like AlphaGo's famous 'Move 37,' models must be paired with a search component that actively explores new parts of the knowledge space beyond the training data.
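One way to picture that pairing, as a rough sketch rather than a description of DeepMind's actual systems: a proposer suggests candidate continuations and a search loop keeps expanding only the best-scoring ones, instead of accepting the first pattern-matched answer. `propose_candidates` and `evaluate` are hypothetical stand-ins for an LLM sampler and a verifier or value signal.

```python
from typing import List

# Hypothetical stand-ins: in a real system these would wrap an LLM sampler
# and a verifier or learned value model.
def propose_candidates(state: str, k: int = 3) -> List[str]:
    """Propose k candidate continuations of a partial solution."""
    return [f"{state} -> idea{i}" for i in range(k)]

def evaluate(state: str) -> float:
    """Toy deterministic score; higher is better."""
    return sum(ord(ch) for ch in state) % 100

def generate_and_search(start: str, depth: int = 3, beam: int = 2) -> str:
    """Expand only the best-scoring candidates at each depth, rather than
    committing to the proposer's single most likely continuation."""
    frontier = [start]
    for _ in range(depth):
        candidates = [c for s in frontier for c in propose_candidates(s)]
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:beam]
    return frontier[0]

print(generate_and_search("seed idea"))
```

The search only pushes beyond the proposer's distribution to the extent that `evaluate` rewards candidates the proposer would not have ranked first on its own.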

Related Insights

A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.

LLMs shine when acting as a 'knowledge extruder'—shaping well-documented, 'in-distribution' concepts into specific code. They fail when the core task is novel problem-solving where deep thinking, not code generation, is the bottleneck. In these cases, the code is the easy part.

LLMs learn two things from pre-training: factual knowledge and intelligent algorithms (the "cognitive core"). Karpathy argues the vast memorized knowledge is a hindrance, making models rely on memory instead of reasoning. The goal should be to strip away this knowledge to create a pure, problem-solving cognitive entity.

Modern LLMs use a simple form of reinforcement learning that directly rewards successful outcomes. This contrasts with more sophisticated methods, like those in AlphaGo or the brain, which use "value functions" to estimate long-term consequences. It's a mystery why the simpler approach is so effective.
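A toy contrast, not a description of any lab's training stack: variant (a) below nudges a two-action softmax policy using only the raw final outcome, while variant (b) subtracts a learned running value estimate, a minimal stand-in for the value functions used in AlphaGo-style systems. The task, learning rates, and success probabilities are invented for illustration.

```python
import math
import random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return [e / sum(exps) for e in exps]

# Invented task: action 1 succeeds 70% of the time, action 0 only 30%.
def outcome_reward(action):
    return 1.0 if random.random() < (0.7 if action == 1 else 0.3) else 0.0

# (a) Outcome-only updates: reinforce whatever preceded a successful outcome.
logits = [0.0, 0.0]
for _ in range(2000):
    probs = softmax(logits)
    a = 0 if random.random() < probs[0] else 1
    r = outcome_reward(a)                           # single scalar at the end
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]  # d log pi / d logit_i
        logits[i] += 0.1 * r * grad

# (b) Value-baseline updates: learn a running estimate of the expected return
# and move the policy by the advantage (r - V) instead of the raw outcome.
logits_v, V = [0.0, 0.0], 0.0
for _ in range(2000):
    probs = softmax(logits_v)
    a = 0 if random.random() < probs[0] else 1
    r = outcome_reward(a)
    advantage = r - V
    V += 0.05 * (r - V)
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits_v[i] += 0.1 * advantage * grad

print("outcome-only policy:", [round(p, 2) for p in softmax(logits)])
print("value-baseline policy:", [round(p, 2) for p in softmax(logits_v)])
```

Both variants learn the better action here; the open question the insight points at is why the outcome-only recipe scales so well without the richer value machinery.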

The critique that LLMs lack true creativity because they only recombine and predict existing data is challenged by the observation that human creativity, particularly in branding and marketing, often operates on the exact same principles. The process involves combining existing concepts in novel ways to feel fresh, much like an LLM.

True creative mastery emerges from an unpredictable human process. AI can generate options quickly but bypasses this journey, losing the potential for inexplicable, last-minute genius that defines truly great work. It optimizes for speed at the cost of brilliance.

Instead of giving an AI creative freedom, defining tight boundaries like word count, writing style, and even forbidden words forces the model to generate more specific, unique, and less generic content. A well-defined box produces a more creative result than an empty field.
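As a sketch of what that well-defined box can look like in practice (the constraint fields and helper names are illustrative, not any particular tool's API): state the boundaries explicitly in the prompt, then check drafts against them.

```python
import re

# Illustrative constraints; tighten or extend these for the task at hand.
constraints = {
    "max_words": 60,
    "style": "dry, technical, no exclamation marks",
    "forbidden_words": ["revolutionary", "game-changing", "unleash"],
}

def build_prompt(task: str, c: dict) -> str:
    """Turn open-ended creative freedom into a tightly specified brief."""
    return (
        f"{task}\n"
        "Constraints:\n"
        f"- At most {c['max_words']} words.\n"
        f"- Style: {c['style']}.\n"
        f"- Never use these words: {', '.join(c['forbidden_words'])}.\n"
    )

def violations(text: str, c: dict) -> list:
    """Check a draft against the box; return the ways it breaks out."""
    problems = []
    if len(text.split()) > c["max_words"]:
        problems.append("too long")
    for w in c["forbidden_words"]:
        if re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE):
            problems.append(f"forbidden word: {w}")
    return problems

print(build_prompt("Write product copy for a note-taking app.", constraints))
print(violations("A revolutionary new way to take notes!", constraints))
```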

Google DeepMind CEO Demis Hassabis argues that today's large models are insufficient for AGI. He believes progress requires reintroducing algorithmic techniques from systems like AlphaGo, specifically planning and search, to enable more robust reasoning and problem-solving capabilities beyond simple pattern matching.
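To make "planning and search" concrete without claiming this is DeepMind's method: the toy planner below scores each candidate move by simulating play to the end, a stripped-down cousin of the Monte Carlo rollouts in AlphaGo, rather than trusting a single pattern-matched guess. The number-line game, horizon, and rollout count are all invented for illustration.

```python
import random

random.seed(1)

ACTIONS = ["add1", "sub1", "double"]   # toy moves on a number line
TARGET, HORIZON = 12, 6                # reach 12 within 6 moves

def step(state: int, action: str) -> int:
    if action == "double":
        return state * 2
    return state + 1 if action == "add1" else state - 1

def rollout(state: int, steps_left: int) -> float:
    """Play randomly to the end and report whether the target was reached.
    AlphaGo-style systems replace the random policy with a learned one."""
    for _ in range(steps_left):
        if state == TARGET:
            return 1.0
        state = step(state, random.choice(ACTIONS))
    return 1.0 if state == TARGET else 0.0

def plan(state: int, steps_left: int, n_rollouts: int = 200) -> str:
    """Estimate each move's long-run value by simulation and pick the best."""
    scores = {}
    for a in ACTIONS:
        nxt = step(state, a)
        scores[a] = sum(rollout(nxt, steps_left - 1) for _ in range(n_rollouts)) / n_rollouts
    return max(scores, key=scores.get)

state, steps = 1, HORIZON
while steps > 0 and state != TARGET:
    state, steps = step(state, plan(state, steps)), steps - 1
print("final state:", state, "target:", TARGET)
```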

Unlike humans, whose poor memory forces them to generalize and find patterns, LLMs are incredibly good at memorization. Karpathy argues this is a flaw: it pulls models toward recalling specific training documents instead of learning the underlying, generalizable algorithms of thought, and so hinders true understanding.

The debate over AI's 'true' creativity is misplaced. Most human innovation isn't a singular breakthrough but a remix of prior work. Since generational geniuses are exceptionally rare, AI only needs to match the innovative capacity of the other 99.9% of humanity to be transformative.