Songyee Yoon applied her Ph.D. in computational neuroscience not to study the brain with computers, but to use principles of human perception to build better technology, such as more effective signal processing algorithms and user interfaces.

Related Insights

The performance ceiling for non-invasive brain-computer interfaces (BCIs) is rising dramatically, driven not by better sensors but by advanced AI. New models can extract high-fidelity signals from noisy data collected outside the skull, potentially making surgical implants like Neuralink unnecessary for sophisticated use cases.

Companies are now growing human brain cells on silicon chips and offering cloud API access so developers can program them. This bio-compute model, which taught neurons to play a video game within a week, is vastly more energy-efficient than traditional GPU clusters, heralding a new computing paradigm.

Today's AI, particularly neural networks, stems from a long tradition in cognitive science where psychologists used mathematical models to understand human thought. Key advances in neural nets were made by researchers trying to replicate how human minds work, not just build intelligent machines.

The primary motivation for biocomputing is not just scientific curiosity; it's a direct response to the massive, unsustainable energy consumption of traditional AI. Living neurons are up to 1,000,000 times more energy-efficient, offering a path to dramatically cheaper and greener AI.

Dr. Levin argues that neuroscience's true subject is the architectural principles of "cognitive glue"—how simple components combine to form larger-scale minds. He believes this process is not unique to neurons and that the field's current focus is too narrow, missing applications in cellular biology, AI, and beyond.

Drawing a parallel to the Cambrian Explosion, where vision evolved alongside nervous systems, Dr. Li argues that perception's primary purpose is to enable action and interaction. This principle suggests that for AI to advance, particularly in robotics, computer vision must be developed as the foundation for embodied intelligence, not just for classification.

The supply chain for neurons is not the main problem; they can be produced easily. The true challenge and next major milestone is "learning in vitro"—discovering the principles to program neural networks to perform consistent, desired computations like recognizing images or executing logic.

The idea for a living computer came not from biologists, but from engineers with backgrounds in signal processing. This highlights how breakthrough innovations often occur at the intersection of disciplines, where outsiders can reframe a problem from a fresh perspective.

The development of neural networks wasn't a linear path. It involved a cycle where computer scientists and psychologists alternately abandoned and revived the concept. When one discipline hit a wall or lost interest, researchers in the other field would pick it up, solve a key problem, and reignite progress.

A neuroscientist-led startup is growing live neurons on electrodes not just for compute efficiency, but as a platform to discover novel algorithms. By studying how biological networks process information, they identify neuroscience principles that can be used as software plugins to improve current AI models and find successors to the transformer architecture.