The supply chain for neurons is not the main problem; they can be produced easily. The true challenge and next major milestone is "learning in vitro"—discovering the principles to program living neural networks to perform consistent, desired computations like recognizing images or executing logic.
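
To make that milestone concrete, here is a minimal sketch of what a "learning in vitro" loop could look like: stimulate the culture, read out its response, and close the loop with feedback. Everything here is an illustrative assumption rather than a real device API; the predictable-versus-random feedback rule loosely follows the approach reported for Cortical Labs' DishBrain experiments.

```python
import random

# Hypothetical closed-loop "learning in vitro" sketch. The Culture class
# and its methods are illustrative stand-ins, not a real hardware API.

class Culture:
    """Stand-in for a neurons-on-a-chip interface with stimulation and
    recording electrodes."""

    def stimulate(self, pattern: str) -> None:
        pass  # deliver an electrical stimulation pattern (not implemented)

    def read_spikes(self) -> list[float]:
        return [random.random() for _ in range(8)]  # fake electrode readout

def train_step(culture: Culture, threshold: float = 4.0) -> bool:
    culture.stimulate("encode_input")        # present the task input
    response = culture.read_spikes()         # read the culture's "answer"
    success = sum(response) > threshold      # toy success criterion
    # Close the loop: structured, predictable feedback when the culture got
    # it right; unstructured noise when it got it wrong.
    culture.stimulate("predictable" if success else "random_noise")
    return success

culture = Culture()
results = [train_step(culture) for _ in range(100)]
print(f"success rate: {sum(results) / len(results):.2f}")
```

Nothing in this toy actually learns, of course; discovering which feedback signals make real cultures converge on a desired computation is exactly the open problem.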

Related Insights

Digital computers have separate units for processing (CPU) and memory (RAM). In biological computation, this distinction dissolves. The strength and pattern of connections between neurons *is* the memory, and the electrical firing (spiking) across these same connections *is* the processing.
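
A toy simulation makes the contrast concrete. Below is a minimal sketch assuming a leaky integrate-and-fire model (all names and parameters are illustrative, not from the source): the weight matrix `W` is the only persistent state, i.e. the memory, and spikes propagating through that same matrix are the only processing.

```python
import numpy as np

# Minimal sketch of memory and processing collapsing into one substrate,
# assuming a toy leaky integrate-and-fire model (parameters illustrative).

rng = np.random.default_rng(0)
n = 5                                   # number of neurons
W = rng.uniform(0.0, 0.6, (n, n))       # synaptic weights: this *is* the memory
np.fill_diagonal(W, 0.0)                # no self-connections

v = np.zeros(n)                         # membrane potentials
threshold, leak = 1.0, 0.9              # spike threshold and leak factor

for t in range(20):
    spikes = (v >= threshold).astype(float)   # which neurons fire this step
    v[spikes > 0] = 0.0                       # reset neurons that fired
    # Processing happens in the same place the memory lives: spikes flow
    # through W, updating downstream membrane potentials.
    v = leak * v + W @ spikes
    v[0] += 0.4                               # constant input drive to neuron 0
    print(f"t={t:2d}  spikes={spikes.astype(int)}")
```

There is no separate store to fetch from: reading the "memory" and running the "computation" are the same matrix operation, which is precisely the property the CPU/RAM split gives up.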

The next major AI breakthrough will come from applying generative models to complex systems beyond human language, such as biology. By treating biological processes as a unique "language," AI could discover novel therapeutics or research paths, leading to a "Move 37" moment in science: a creative leap no human expert would have proposed, like AlphaGo's famous move against Lee Sedol.
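
As a rough illustration of the "biology as a language" framing, an amino-acid sequence can be tokenized exactly the way a language model tokenizes text, after which standard next-token training applies. The vocabulary, sequence, and function below are toy assumptions, not from the source.

```python
# Toy sketch: treat a protein's amino-acid sequence as a string of tokens,
# the same way a language model treats text. All values are illustrative.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"           # the 20 standard residues
vocab = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Map each residue to an integer token id, as an LM tokenizer would."""
    return [vocab[aa] for aa in sequence]

# A generative model would then train on (context -> next token) pairs,
# learning the "grammar" of viable proteins much as GPT learns English.
seq = "MKTAYIAK"                               # hypothetical fragment
tokens = tokenize(seq)
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
print(tokens)      # [10, 8, 16, 0, 19, 7, 0, 8]
print(pairs[0])    # ([10], 8): predict 'K' given 'M'
```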

The primary motivation for biocomputing is not just scientific curiosity; it's a direct response to the massive, unsustainable energy consumption of traditional AI. Living neurons are up to 1,000,000 times more energy-efficient than silicon hardware, offering a path to dramatically cheaper and greener AI.

The era of guaranteed progress by simply scaling up compute and data for pre-training is ending. With massive compute now available, the bottleneck is no longer resources but fundamental ideas. The AI field is re-entering a period where novel research, not just scaling existing recipes, will drive the next breakthroughs.

Contrary to sci-fi imagery, the living neurons for biocomputing platforms are not extracted from animals. They are created from commercially available stem cells, originally reprogrammed from human skin cells. This process avoids the ethical and practical issues tied to using primary tissue.

Companies like Cortical Labs are growing human brain cells on chips to create energy-efficient biological computers. This radical approach could power future server farms and make personal 'digital twins' feasible by overcoming the massive energy demands of current supercomputers.

While AGI is out of reach for today's computers, it is not theoretically impossible. Creating a generally intelligent system will require a new physical substrate—likely biological or chemical—that can replicate the brain's enormous, dynamic configurational space, something silicon architectures cannot do.

There's a qualitative difference between neurons grown in vitro from stem cells and those found in an adult brain. The scientific community debates whether lab-grown neurons are less mature, akin to "infant" neurons, and whether they lack certain receptors. The "perfect" neuron for computation is an open research question.

The founder of AI and robotics firm Medra argues that scientific progress is not limited by a lack of ideas or AI-generated hypotheses. Instead, the critical constraint is the physical capacity to test these ideas and generate high-quality data to train better AI models.

Biological intelligence has no OS or APIs; the physics of the brain *is* the computation. Unconventional AI's CEO Naveen Rao argues that current AI is inefficient because it runs on layers of abstraction. The future is hardware where intelligence is an emergent property of the system's physics.