The "temporal difference" algorithm, which tracks changing expectations, isn't just a theoretical model. It is biologically installed in brains via dopamine. This same algorithm was externalized by DeepMind to create a world-champion Go-playing AI, representing a unique instance of biology directly inspiring a major technological breakthrough.

Related Insights

DeepMind's core breakthrough was treating AI like a child, not a machine. Instead of programming complex strategies, they had the system master simple games like Pong, giving it only one rule ("score go up is good") and letting it learn for itself through trial and error.
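A minimal sketch of that recipe in Python (the toy two-action "game" and all numbers are invented for illustration): the agent is told nothing about strategy, sees only a score, and improves purely by trial and error.

```python
import random

# Toy "game" in the spirit of the Atari setup: two actions, unknown payoffs,
# and the only feedback is whether the score went up.
TRUE_PAYOFF = {"left": 0.3, "right": 0.7}    # hidden from the agent

Q = {"left": 0.0, "right": 0.0}              # the agent's learned value per action
EPSILON, ALPHA = 0.1, 0.05                   # exploration rate, learning rate

def play_once():
    # Trial and error: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(list(Q))
    else:
        action = max(Q, key=Q.get)
    score = 1.0 if random.random() < TRUE_PAYOFF[action] else 0.0  # "score go up is good"
    Q[action] += ALPHA * (score - Q[action])  # nudge the estimate toward the outcome
    return action, score

for _ in range(5000):
    play_once()

print(Q)   # the agent discovers "right" pays better without ever being told why
```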

The brain's hardware limitations, like slow and stochastic neurons, may actually be advantages. These properties seem perfectly suited to probabilistic inference algorithms that rely on sampling, a task that requires explicit, computationally intensive random-number generation in digital systems. Hardware and algorithm are likely co-designed.
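A toy illustration of the sampling view (the probabilities are made up): inferring a posterior by drawing and filtering samples. On digital hardware, every sample costs an explicit pseudo-random-number call; a stochastic neuron would supply that randomness intrinsically.

```python
import random

# Tiny Bayes net: Rain -> WetGrass. Infer P(rain | grass is wet) by sampling.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = 0.9
P_WET_GIVEN_DRY = 0.1

def sample_world():
    # Each random.random() call is the explicit RNG work a digital system
    # must do; noisy biological hardware would get this noise for free.
    rain = random.random() < P_RAIN
    p_wet = P_WET_GIVEN_RAIN if rain else P_WET_GIVEN_DRY
    wet = random.random() < p_wet
    return rain, wet

def posterior_rain_given_wet(n=100_000):
    hits = kept = 0
    for _ in range(n):
        rain, wet = sample_world()
        if wet:               # rejection sampling: keep only consistent worlds
            kept += 1
            hits += rain
    return hits / kept

print(posterior_rain_given_wet())   # ~0.69 (exact: 0.18 / 0.26 ≈ 0.692)
```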

Modern LLMs use a simple form of reinforcement learning that directly rewards successful final outcomes. This contrasts with more sophisticated methods, like those in AlphaGo or the brain, which use "value functions" to estimate the long-term consequences of each intermediate step. Why the simpler approach is so effective remains an open question.
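A hedged sketch of the contrast (the state names and constants are hypothetical): the simple recipe gives every step of a successful trajectory the same outcome reward, while a value-function method assigns each step its own expectation-relative credit.

```python
GAMMA, ALPHA = 0.9, 0.1
trajectory = ["draft", "check", "answer"]   # toy reasoning steps
final_reward = 1.0                          # the outcome was verified as correct

# (1) Outcome-only credit, the simple recipe: every step of a successful
# trajectory receives the same scalar signal.
outcome_credit = {s: final_reward for s in trajectory}

# (2) Value-function credit, AlphaGo-style: each step is judged against a
# learned estimate of long-term return, yielding a step-specific error signal.
V = {s: 0.0 for s in trajectory + ["done"]}

def td_credit(traj, r_final):
    credits = {}
    for i, s in enumerate(traj):
        s_next = traj[i + 1] if i + 1 < len(traj) else "done"
        r = r_final if i == len(traj) - 1 else 0.0
        delta = r + GAMMA * V[s_next] - V[s]
        V[s] += ALPHA * delta
        credits[s] = delta
    return credits

print(outcome_credit)                        # uniform credit across all steps
print(td_credit(trajectory, final_reward))   # per-step, expectation-relative credit
```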

In domains like coding and math, where correctness is automatically verifiable, AI can move beyond learning from human judgments (RLHF). Using pure reinforcement learning, or "experiential learning," models learn via self-play and can discover novel, superhuman strategies akin to AlphaGo's Move 37.
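A sketch of what "automatically verifiable" means in practice (the verifier, file handling, and test cases here are illustrative, not any lab's actual pipeline): the reward comes from mechanically running the model's output against tests, with no human rater in the loop.

```python
import os, subprocess, sys, tempfile

def verifiable_reward(candidate_code: str, tests: list[tuple[str, str]]) -> float:
    """Reward 1.0 iff the candidate program passes every input/output test.
    Correctness is checked mechanically, not judged by a human."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code)
        path = f.name
    try:
        for stdin_text, expected in tests:
            try:
                out = subprocess.run([sys.executable, path], input=stdin_text,
                                     capture_output=True, text=True, timeout=5)
            except subprocess.TimeoutExpired:
                return 0.0
            if out.stdout.strip() != expected:
                return 0.0
        return 1.0
    finally:
        os.unlink(path)

# Toy usage: the "model" proposes a program; the verifier scores it.
candidate = "print(int(input()) * 2)"
print(verifiable_reward(candidate, [("3", "6"), ("10", "20")]))   # -> 1.0
```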

In humans, learning a new skill is a highly conscious process that becomes unconscious once mastered. This suggests a link between learning and consciousness. The error signals and reward functions in machine learning could be computational analogues to the valenced experiences (pain/pleasure) that drive biological learning.

The transition from supervised learning (copying internet text) to reinforcement learning (rewarding a model for achieving a goal) marks a fundamental breakthrough. This method, used in Anthropic's Claude 3 Opus model, allows AI to develop novel problem-solving capabilities beyond simple data emulation.

Google DeepMind CEO Demis Hassabis argues that today's large models are insufficient for AGI. He believes progress requires reintroducing algorithmic techniques from systems like AlphaGo, specifically planning and search, to enable more robust reasoning and problem-solving capabilities beyond simple pattern matching.
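To make "planning and search" concrete, here is a deliberately simplified sketch (flat Monte Carlo playouts on the toy game Nim, not AlphaGo's full tree search): instead of answering from pattern matching alone, the agent simulates possible futures before committing to a move.

```python
import random

# Nim: take 1-3 stones per turn; whoever takes the last stone wins.

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones, my_turn):
    # Finish the game with random moves; return 1 if "we" take the last stone.
    while stones > 0:
        stones -= random.choice(moves(stones))
        if stones == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0

def search_move(stones, simulations=6000):
    # Flat Monte Carlo search: score each legal move by random playouts,
    # then pick the move with the best empirical win rate.
    stats = {m: [0, 0] for m in moves(stones)}   # move -> [wins, plays]
    for _ in range(simulations):
        m = random.choice(list(stats))
        left = stones - m
        win = 1 if left == 0 else rollout(left, my_turn=False)
        stats[m][0] += win
        stats[m][1] += 1
    return max(stats, key=lambda m: stats[m][0] / stats[m][1])

print(search_move(10))   # usually 2: leaving a multiple of 4 is the winning play
```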

Most people believe dopamine spikes when rewards arrive. In reality, it continuously tracks the difference between your current expectation and your next one, even when no final outcome has occurred. This "temporal difference error" is the brain's core learning mechanism, constantly updating your behavior as you move through the world, and it mirrors the algorithms inside advanced AI.
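A minimal sketch of temporal-difference learning (the toy states and constants are illustrative): the dopamine-like quantity is delta, the gap between successive expectations, which can be large even on steps where no reward is delivered.

```python
GAMMA = 0.9   # discount on future expectations
ALPHA = 0.1   # learning rate

# Value estimates for a toy trial: cue -> wait -> end (reward arrives at the end).
V = {"cue": 0.0, "wait": 0.0, "end": 0.0}

def td_update(s, s_next, r):
    """One temporal-difference step; delta plays the role of the dopamine signal."""
    delta = r + GAMMA * V[s_next] - V[s]   # better or worse than expected?
    V[s] += ALPHA * delta
    return delta

deltas = []
for trial in range(200):
    d_cue = td_update("cue", "wait", 0.0)   # zero reward here, yet delta can be nonzero
    td_update("wait", "end", 1.0)           # reward delivered
    deltas.append(d_cue)

print(V)                       # value has propagated backward from reward to cue
print(deltas[1], deltas[-1])   # early: a learning signal with no reward; late: it fades
```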

Traditional science failed to find compact equations for complex biological systems because biology is too "bespoke." AI succeeds by discerning patterns in vast datasets, effectively serving as the "language" for modeling biology, much as mathematics is the language of physics.

AI models use simple, mathematically clean loss functions. The human brain's superior learning efficiency might stem from evolution hard-coding numerous, complex, and context-specific loss functions that activate at different developmental stages, creating a sophisticated learning curriculum.
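A speculative sketch of that idea (the stage names, loss terms, and schedule are entirely invented): a developmental curriculum expressed as loss terms whose weights switch on and off as training progresses.

```python
# Hypothetical loss terms (stand-ins; real ones would be task-specific).
def sensory_loss(pred, target):  return (pred - target) ** 2
def social_loss(pred, target):   return abs(pred - target)
def language_loss(pred, target): return 0.5 * (pred - target) ** 2

LOSSES = {"sensory": sensory_loss, "social": social_loss, "language": language_loss}

# A made-up developmental schedule: which losses are active, and how strongly,
# as training progresses (sorted by starting step).
CURRICULUM = [
    (0,    {"sensory": 1.0}),                  # "infancy": raw perception
    (1000, {"sensory": 0.3, "social": 1.0}),   # "childhood": add social signals
    (5000, {"social": 0.5, "language": 1.0}),  # "adolescence": add language
]

def stage_weights(step):
    active = {}
    for start, weights in CURRICULUM:   # keep the latest stage we've reached
        if step >= start:
            active = weights
    return active

def total_loss(step, pred, target):
    return sum(w * LOSSES[name](pred, target)
               for name, w in stage_weights(step).items())

print(total_loss(10, 0.8, 1.0))     # infancy: sensory term only
print(total_loss(2000, 0.8, 1.0))   # childhood: a blend of sensory and social
```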