The cortex has a uniform six-layer structure and algorithm throughout. Whether it becomes visual or auditory cortex depends entirely on the sensory information plugged into it, demonstrating its remarkable flexibility and general-purpose nature, much like a universal computer chip.
LLMs predict the next token in a sequence. The brain's cortex may function as a general prediction engine capable of "omnidirectional inference"—predicting any missing information from any available subset of inputs, not just what comes next. This offers a more flexible and powerful form of reasoning.
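The contrast can be made concrete with a toy joint distribution. The sketch below (a hypothetical three-variable table, not a model of cortex) shows what "predicting any missing information from any available subset" means: the same table answers forward queries (effect from cause) and backward queries (cause from effect), whereas a next-token predictor is wired for only one direction.

```python
# Toy illustration of "omnidirectional inference": given a joint
# distribution over variables, predict ANY missing variable from ANY
# observed subset, rather than only the next item in a sequence.
# The distribution below is invented purely for illustration.

# Joint P(weather, sprinkler, grass) as a flat table (sums to 1.0).
P = {
    ("rain", "off", "wet"): 0.25, ("rain", "off", "dry"): 0.05,
    ("rain", "on",  "wet"): 0.09, ("rain", "on",  "dry"): 0.01,
    ("sun",  "off", "wet"): 0.02, ("sun",  "off", "dry"): 0.28,
    ("sun",  "on",  "wet"): 0.24, ("sun",  "on",  "dry"): 0.06,
}
VARS = ("weather", "sprinkler", "grass")

def infer(target, observed):
    """P(target | observed): sum the joint over all unobserved variables."""
    scores = {}
    for assignment, p in P.items():
        row = dict(zip(VARS, assignment))
        if all(row[k] == v for k, v in observed.items()):
            scores[row[target]] = scores.get(row[target], 0.0) + p
    z = sum(scores.values())
    return {value: p / z for value, p in scores.items()}

# Any direction works with the same machinery:
print(infer("grass", {"weather": "rain"}))   # forward: predict effect
print(infer("weather", {"grass": "wet"}))    # backward: infer cause
```

Note the asymmetry with sequence models: here "what comes next" is just one of many queries the same representation can answer.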
The human brain's possible wiring configurations outnumber the atoms in the observable universe. This immense, dynamic 'configurational space', not raw processing speed, is the source of its power. Silicon chips, with their fixed wiring, are fundamentally different and cannot replicate this morphing, high-dimensional architecture.
The brain's hardware limitations, like slow and stochastic neurons, may actually be advantages. These properties seem well suited to probabilistic inference algorithms that rely on sampling, a task that requires explicit, computationally intensive random-number generation in digital systems. Hardware and algorithm are likely co-designed.
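A minimal sketch of the point, assuming the standard sigmoid firing model: a sampling-based system needs a stream of random numbers, which a digital machine must manufacture explicitly (here via Python's `random`), whereas the suggestion is that noisy neurons get equivalent stochasticity free from the physics of the hardware.

```python
# Sampling-based inference on digital hardware: every stochastic
# "spike" below costs an explicit pseudo-random-number generation.
import math
import random

def stochastic_unit(drive, rng):
    """A binary 'neuron' that fires with probability sigmoid(drive)."""
    return 1 if rng.random() < 1.0 / (1.0 + math.exp(-drive)) else 0

def estimate_firing_rate(drive, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the unit's firing probability."""
    rng = random.Random(seed)
    return sum(stochastic_unit(drive, rng) for _ in range(n_samples)) / n_samples

# The empirical rate converges on sigmoid(drive), ~0.73 for drive=1.0,
# but only after 100,000 explicit RNG calls.
print(estimate_firing_rate(1.0))
```

A physically noisy neuron would, on this view, produce each of those samples as a by-product of simply operating.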
Unlike other species, humans are born with "half-baked" brains that wire themselves based on the culture, language, and knowledge accumulated by all previous generations. This cumulative learning, not just individual experience, is the key to our rapid advancement as a species.
Single-cell brain atlases reveal that subcortical "steering" regions have a vastly greater diversity of cell types than the more uniform cortex. This supports the idea that our innate drives and reflexes are encoded in complex, genetically pre-wired circuits, while the cortex is a more general-purpose learning architecture.
The debate over whether "true" AGI will be a monolithic model or use external scaffolding is misguided. Our only existing proof of general intelligence—the human brain—is a complex, scaffolded system with specialized components. This suggests scaffolding is not a crutch for AI, but a natural feature of advanced intelligence.
With roughly ten times as many feedback connections descending toward the early visual system as feedforward connections ascending from the retina, the brain actively predicts reality and uses sensory input primarily to correct errors. This explains phantom sensations, like feeling a stair that isn't there, where the brain's simulation briefly overrides sensory fact.
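The error-correction idea can be sketched in one line of arithmetic, in the style of predictive-coding models (the gain parameter here is an illustrative stand-in for sensory precision, not a measured quantity): the percept is a running prediction, nudged by the mismatch between prediction and input.

```python
# One predictive-coding-style update: the percept moves from the
# prediction toward the sensory input, scaled by how much the
# sensory signal is trusted (its "gain" or precision).
def update_percept(prediction, sensory_input, sensory_gain):
    """percept = prediction + gain * (input - prediction)."""
    error = sensory_input - prediction
    return prediction + sensory_gain * error

# High gain (clear daylight): the percept snaps toward the input.
print(round(update_percept(prediction=1.0, sensory_input=0.0, sensory_gain=0.9), 3))  # 0.1
# Low gain (dark staircase): the prediction dominates, so you "feel"
# the expected extra stair even though the input says it isn't there.
print(round(update_percept(prediction=1.0, sensory_input=0.0, sensory_gain=0.1), 3))  # 0.9
```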
A "frontier interface" is one where the interaction model is completely unknown. Historically, from light pens to cursors to multi-touch, the physical input mechanism has dictated the entire scope of what a computer can do. Brain-computer interfaces represent the next fundamental shift, moving beyond physical manipulation.
Just as a blind person's visual cortex is repurposed for heightened hearing and touch, savantism might be an extreme case of this principle. An individual may develop superhuman skills by allocating a disproportionate amount of neural resources to one area, often at the cost of others like social skills.
AI models use simple, mathematically clean loss functions. The human brain's superior learning efficiency might stem from evolution hard-coding numerous, complex, and context-specific loss functions that activate at different developmental stages, creating a sophisticated learning curriculum.
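A hypothetical sketch of what such a "loss curriculum" could look like, in standard ML terms. Every stage name, loss term, and weight below is invented for illustration; the point is only the structure: several specialized objectives blended with weights that change over developmental time, rather than one fixed loss.

```python
# Invented developmental "loss curriculum": multiple loss terms whose
# weights are switched up and down across developmental stages.
def curriculum_loss(predictions, targets, stage):
    """Blend several loss terms with stage-dependent weights."""
    # Hypothetical schedule: early learning emphasizes raw
    # reconstruction, later learning emphasizes task accuracy.
    weights = {
        "infant":     {"reconstruction": 1.0, "prediction": 0.2, "task": 0.0},
        "child":      {"reconstruction": 0.5, "prediction": 1.0, "task": 0.3},
        "adolescent": {"reconstruction": 0.1, "prediction": 0.5, "task": 1.0},
    }[stage]
    losses = {
        "reconstruction": sum((p - t) ** 2 for p, t in zip(predictions, targets)),
        "prediction": sum(abs(p - t) for p, t in zip(predictions, targets)),
        "task": sum(1.0 for p, t in zip(predictions, targets)
                    if round(p) != round(t)),
    }
    return sum(weights[name] * losses[name] for name in losses)

# The same errors are penalized differently at different stages.
print(curriculum_loss([0.9, 0.2], [1.0, 0.0], "infant"))
print(curriculum_loss([0.9, 0.2], [1.0, 0.0], "adolescent"))
```

Contrast this with a typical LLM objective, where a single cross-entropy loss with fixed form applies uniformly from the first training step to the last.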