Single-cell brain atlases reveal that subcortical "steering" regions have a vastly greater diversity of cell types than the more uniform cortex. This supports the idea that our innate drives and reflexes are encoded in complex, genetically pre-wired circuits, while the cortex is a more general-purpose learning architecture.
LLMs predict the next token in a sequence. The brain's cortex may function as a general prediction engine capable of "omnidirectional inference"—predicting any missing information from any available subset of inputs, not just what comes next. This offers a more flexible and powerful form of reasoning.
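The contrast can be made concrete with a toy probabilistic model. The sketch below (a hypothetical joint distribution over three binary variables, not brain data) shows "omnidirectional inference": conditioning on any observed subset and predicting the rest with the same machinery, with no privileged left-to-right order.

```python
# Toy sketch of "omnidirectional inference": given a joint distribution
# over variables, condition on ANY observed subset and infer the rest,
# rather than only predicting "the next" variable in a fixed order.
# The joint table is an illustrative assumption, not real data.

# Joint distribution over three binary variables (A, B, C).
joint = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.20,
    (1, 0, 0): 0.20, (1, 0, 1): 0.10,
    (1, 1, 0): 0.05, (1, 1, 1): 0.25,
}

def infer(observed):
    """Posterior over full assignments consistent with `observed`,
    a dict mapping variable index -> value. Works for any subset."""
    consistent = {a: p for a, p in joint.items()
                  if all(a[i] == v for i, v in observed.items())}
    z = sum(consistent.values())
    return {a: p / z for a, p in consistent.items()}

# Predict "forward" (C given A, B) or "backward" (A given C) with the
# same function -- no privileged direction of inference.
forward = infer({0: 1, 1: 1})   # observe A=1, B=1, infer C
backward = infer({2: 1})        # observe C=1, infer A and B
```

An autoregressive model bakes in one factorization order; here any subset of variables can play the role of "context."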
Our perception of sensing and then reacting is an illusion. The brain constantly predicts the next moment from past experience, preparing actions before sensory information fully arrives. This predictive strategy is far more efficient than reacting to the world from scratch: in effect, we act on predictions first and use the senses to correct them.
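A minimal sketch of that loop, with an illustrative exponential-smoothing predictor standing in for the brain's model: the agent commits to an action based on its prediction, and the actual input serves only to compute a correction.

```python
# Minimal sketch of "predict first, sense later": act on the prediction,
# then use the actual input only to compute a prediction error and
# update. The update rule (exponential smoothing) and inputs are
# illustrative assumptions, not a neural model.

def run(inputs, lr=0.5):
    prediction = 0.0
    errors = []
    for actual in inputs:
        action = prediction          # act on the prediction, before sensing
        error = actual - prediction  # sensing serves to correct the model
        prediction += lr * error     # update the model for the next moment
        errors.append(abs(error))
    return errors

# A steady, predictable stream becomes cheap to process: prediction
# errors shrink, so ever less correction is needed.
errors = run([1.0] * 10)
```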
The neural systems evolved for physical survival—managing pain, fear, and strategic threats—are the same ones activated during modern stressors like workplace arguments or relationship conflicts. The challenges have changed from starvation to spreadsheets, but the underlying brain hardware hasn't.
The small size of the human genome is a puzzle: roughly 3 billion base pairs (under a gigabyte of information) cannot directly specify the brain's trillions of synaptic connections. The solution may be that evolution doesn't store a large "pre-trained model." Instead, it uses the limited genomic space to encode a complex set of reward and loss functions, which is a far more compact way to guide a powerful learning algorithm.
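The compactness argument can be sketched in a few lines. In this toy (everything here is an illustrative assumption, not a model of real genomes), the "genomic" payload is a one-number reward spec, yet it suffices to shape a policy with a hundred learned parameters.

```python
# Sketch of the compactness argument: instead of shipping a full
# "pre-trained model" (one stored value per weight), the genome ships a
# tiny reward function that a generic learner optimizes at runtime.
# The task, reward spec, and learner are toy assumptions.
import random

random.seed(0)

N_STATES = 100
reward_spec = {"target": 7}            # tiny "genomic" payload: 1 number

def reward(state, action):
    # Innate reward: +1 for taking the target action, else 0.
    return 1.0 if action == reward_spec["target"] else 0.0

# A generic learner acquires a large policy (one entry per state),
# guided only by the tiny reward spec.
policy = [random.randrange(10) for _ in range(N_STATES)]
for state in range(N_STATES):
    policy[state] = max(range(10), key=lambda a: reward(state, a))

learned_params = len(policy)           # 100 parameters acquired at runtime
genomic_params = len(reward_spec)      # vs. 1 parameter "inherited"
```

The ratio of learned to inherited parameters grows with the size of the policy, while the "genome" stays fixed.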
It is a profound mystery how evolution hardcodes abstract social desires (e.g., reputation) into our genome. Unlike simple sensory rewards, these require complex cognitive processing to even identify. Solving this could unlock powerful new methods for instilling robust, high-level values in AI systems.
The debate over whether "true" AGI will be a monolithic model or use external scaffolding is misguided. Our only existing proof of general intelligence—the human brain—is a complex, scaffolded system with specialized components. This suggests scaffolding is not a crutch for AI, but a natural feature of advanced intelligence.
Andrej Karpathy argues that comparing AI to animal learning is flawed because animal brains possess powerful initializations encoded in DNA via evolution. These enable complex behaviors almost immediately (e.g., a newborn zebra running within hours of birth), which contradicts the 'tabula rasa' or 'blank slate' assumption behind many AI models.
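The initialization argument can be illustrated with a toy gradient learner (the task and numbers are made up for illustration): the same learning rule reaches competence in zero steps from an "evolved" starting point, versus many steps from a blank slate.

```python
# Sketch of the initialization argument: identical learning rule,
# different starting points. The quadratic task, learning rate, and
# tolerance are illustrative assumptions.

def steps_to_competence(w, target=5.0, lr=0.1, tol=0.1):
    """Gradient descent on (w - target)**2; count steps until |error| <= tol."""
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)
        steps += 1
    return steps

blank_slate = steps_to_competence(w=0.0)    # starts far from the solution
evolved_init = steps_to_competence(w=4.9)   # "DNA" starts it nearly there
```

The "evolved" learner is competent from its first moment, like the newborn zebra; the blank-slate learner needs an extended training run.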
The brain connects abstract, learned concepts (like social status) to innate rewards (like shame or pride) via a "steering subsystem." The cortex learns to predict the responses of this more primitive system, effectively linking new knowledge to hardwired emotional and motivational circuits.
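A toy version of that linkage (all cues, weights, and the learning rule are illustrative assumptions): an innate circuit rewards only a hardwired cue, and a learned predictor trained to anticipate that circuit ends up assigning value to a co-occurring learned cue on its own.

```python
# Sketch of the "steering subsystem" linkage: an innate circuit emits
# reward for a fixed, genetically specified cue; a cortex-like learner
# is trained to predict that signal, and thereby extends value to
# learned cues that merely co-occur with the innate one.

def innate_steering(cues):
    # Hardwired: responds only to the genetically specified cue.
    return 1.0 if "sweet_taste" in cues else 0.0

# Learned predictor: linear weights over a broader learned feature set.
weights = {"sweet_taste": 0.0, "red_berry": 0.0}

def predict(cues):
    return sum(weights[f] for f in cues if f in weights)

# Training: red berries reliably co-occur with sweetness, so credit for
# the innate reward is shared with the learned cue.
lr = 0.2
for _ in range(50):
    cues = {"sweet_taste", "red_berry"}
    err = innate_steering(cues) - predict(cues)
    for f in cues:
        weights[f] += lr * err

# The learned cue alone now predicts reward, even though the innate
# circuit never responded to it directly.
learned_value = predict({"red_berry"})
```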
AI models use simple, mathematically clean loss functions. The human brain's superior learning efficiency might stem from evolution hard-coding numerous, complex, and context-specific loss functions that activate at different developmental stages, creating a sophisticated learning curriculum.
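A developmental loss curriculum can be sketched as a training loop that swaps objectives between stages (the stages, losses, and one-parameter model are illustrative assumptions): a crude early loss gets the learner into the right region, and a precise later loss refines it.

```python
# Sketch of a developmental "loss curriculum": instead of one fixed,
# mathematically clean loss, different hand-coded losses switch on at
# different stages of training. All details are toy assumptions.

def loss_stage1(w):
    # Early stage: crude shaping loss, just push w above 1.
    return max(0.0, 1.0 - w)

def loss_stage2(w):
    # Later stage: precise objective, minimized at w = 3.
    return (w - 3.0) ** 2

schedule = [(100, loss_stage1), (100, loss_stage2)]  # the "curriculum"

w, lr, eps = 0.0, 0.05, 1e-4
for steps, loss in schedule:
    for _ in range(steps):
        # Finite-difference gradient keeps the sketch loss-agnostic.
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
        w -= lr * grad
```

A single fixed loss would have to encode both goals at once; the staged schedule lets each simple objective do one job at the right time.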