As part of an art project, Mykhailo Marynenko used an EEG helmet on performers to capture their visual cortex activity in real time. An AI model then translated these brain signals into images, projecting the performers' imagination onto a stage for the audience to witness during the performance.

Related Insights

The performance ceiling for non-invasive brain-computer interfaces (BCIs) is rising dramatically, driven not by better sensors but by advanced AI. New models can extract high-fidelity signals from noisy data recorded outside the skull, potentially making surgical implants like Neuralink's unnecessary for sophisticated use cases.
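
As a rough illustration of the claim that better modeling can substitute for better sensors, here is a minimal denoising-autoencoder sketch on synthetic EEG-like traces: a small network learns to recover a clean signal from a heavily corrupted copy. The architecture, signal model, and noise level are all assumptions for illustration, not the systems discussed in the episode.

```python
import torch
import torch.nn as nn

def synthetic_eeg(batch, length=256):
    """A few random sinusoids standing in for clean cortical signal."""
    t = torch.linspace(0, 1, length)
    freqs = torch.randint(4, 40, (batch, 3)).float()   # frequencies in a roughly EEG-like 4-40 Hz range
    phases = torch.rand(batch, 3) * 6.283
    return sum(torch.sin(6.283 * freqs[:, i:i + 1] * t + phases[:, i:i + 1])
               for i in range(3))

class DenoisingAE(nn.Module):
    """Tiny autoencoder trained to map noisy recordings back to the clean signal."""
    def __init__(self, length=256, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(length, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, length)

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    clean = synthetic_eeg(32)
    noisy = clean + 1.5 * torch.randn_like(clean)   # stand-in for scalp/skull noise
    loss = nn.functional.mse_loss(model(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```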

Meta's Tribe V2 is a foundation model trained on over 500 hours of fMRI data. It creates a "digital twin" of neural activity to predict brain responses to sights and sounds, raising questions about how a social media company might put it to use.
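
The "digital twin" described here is essentially an encoding model: given features of what a person sees or hears, predict the measured brain response. Below is a minimal sketch with simulated data and ridge regression; the feature dimensions, voxel counts, and choice of regression are illustrative assumptions, not Tribe V2's actual design.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 1000, 128, 500

# Stimulus features (e.g. audio/visual embeddings) and a simulated BOLD response.
X = rng.normal(size=(n_timepoints, n_features))
true_W = rng.normal(size=(n_features, n_voxels)) * 0.1
Y = X @ true_W + rng.normal(scale=0.5, size=(n_timepoints, n_voxels))

# Fit the encoding model on the first 800 timepoints, evaluate on the rest.
encoder = Ridge(alpha=10.0).fit(X[:800], Y[:800])
pred = encoder.predict(X[800:])

# Per-voxel correlation between predicted and held-out responses is the usual score.
corr = [np.corrcoef(pred[:, v], Y[800:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(corr):.2f}")
```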

The next frontier for Neuralink is "blindsight," restoring vision by stimulating the brain. The primary design challenge isn't just technical; it's creating a useful visual representation with very few "pixels" of neural stimulation. The problem is akin to designing a legible, lifelike image using Atari-level graphics.
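
One way to get a feel for the constraint is to downsample an image to a tiny grid of on/off "phosphenes" and see how little detail survives. The grid size and thresholding in the sketch below are arbitrary assumptions, not Neuralink specifications.

```python
import numpy as np

def to_phosphenes(image, grid=(16, 16)):
    """Average-pool a grayscale image down to `grid`, then binarize (on/off)."""
    h, w = image.shape
    gh, gw = grid
    blocks = image[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    pooled = blocks.mean(axis=(1, 3))
    return (pooled > pooled.mean()).astype(np.uint8)  # 1 = stimulate, 0 = off

# Example: a synthetic 256x256 "scene" with a bright square in the middle.
scene = np.zeros((256, 256))
scene[64:192, 64:192] = 1.0
print(to_phosphenes(scene, grid=(16, 16)))
```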

By analyzing crowd behavior with sensors at music events, Mykhailo's team used generative AI to dynamically create music targeting disengaged attendees. This covertly boosted overall crowd engagement from approximately 60% to nearly 90%, a powerful demonstration of how AI can modulate group emotion and attention.

Paradromics uses LLMs to decode speech from brain signals, much as speech-to-text systems clean up noisy audio. This enables faster, more accurate "thought-to-text": the model predicts what the user intends to say, even from imperfect neural data, and corrects errors in real time.
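
A toy version of that correction step treats each decoded word as the output of a noisy channel and scores vocabulary candidates by a language-model prior combined with string similarity. The tiny vocabulary, weighting, and similarity measure below are placeholder assumptions, not Paradromics' actual pipeline.

```python
import difflib
import math

# Stand-in "language model": prior probabilities over a tiny vocabulary.
lm_prior = {"water": 0.30, "walk": 0.25, "later": 0.20, "wait": 0.15, "what": 0.10}

def correct(noisy_word):
    """Pick the vocabulary word maximizing log P(word) + weighted log similarity."""
    def score(word):
        similarity = difflib.SequenceMatcher(None, noisy_word, word).ratio()
        # Crude channel model: more similar strings are more likely corruptions.
        return math.log(lm_prior[word]) + 5.0 * math.log(similarity + 1e-6)
    return max(lm_prior, key=score)

# A garbled neural-decoder output still lands on a sensible word.
print(correct("wadter"))  # -> "water"
print(correct("wlak"))    # -> "walk"
```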

We don't perceive reality directly; our brain constructs a predictive model, filling in gaps and warping sensory input to help us act. Augmented reality isn't a tech fad but an intuitive evolution of this biological process, superimposing new data onto our brain's existing "controlled model" of the world.

When we observe neurons, we are not seeing the true substrate of thought. Instead, we are seeing our 'headset's' symbolic representation of the complex conscious agent dynamics that are responsible for creating our interface in the first place.

The process of an AI like Stable Diffusion creating a coherent image by finding patterns within a vast possibility space of random noise serves as a powerful analogy. It illustrates how consciousness might render a structured reality by selecting and solidifying possibilities from an infinite field of potential experiences.

A novel training method involves adding an auxiliary task for AI models: predicting the neural activity of a human observing the same data. This "brain-augmented" learning could force the model to adopt more human-like internal representations, improving generalization and alignment beyond what simple labels can provide.
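
In code, the idea reduces to a multi-task loss: one head predicts the task label, a second head predicts the human observer's neural activity, and the two losses are summed. The sketch below uses simulated recordings and arbitrary dimensions purely for illustration.

```python
import torch
import torch.nn as nn

class BrainAugmentedNet(nn.Module):
    """Shared backbone with a task head and an auxiliary neural-prediction head."""
    def __init__(self, in_dim=784, n_classes=10, neural_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.class_head = nn.Linear(256, n_classes)    # standard label prediction
        self.neural_head = nn.Linear(256, neural_dim)  # auxiliary: predict brain activity

    def forward(self, x):
        h = self.backbone(x)
        return self.class_head(h), self.neural_head(h)

model = BrainAugmentedNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 784)              # stimuli shown to both the model and a human
    labels = torch.randint(0, 10, (32,))  # ordinary task labels
    neural = torch.randn(32, 64)          # placeholder for recorded neural responses
    logits, neural_pred = model(x)
    loss = (nn.functional.cross_entropy(logits, labels)
            + 0.5 * nn.functional.mse_loss(neural_pred, neural))  # brain-alignment term
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The 0.5 weight on the neural term controls how strongly the internal representation is pulled toward the human signal; in a real setting the placeholder tensor would be replaced by actual recordings aligned to each stimulus.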

A neuroscientist-led startup is growing live neurons on electrodes not just for compute efficiency, but as a platform to discover novel algorithms. By studying how biological networks process information, they identify neuroscience principles that can be used as software plugins to improve current AI models and find successors to the transformer architecture.