We scan new podcasts and send you the top 5 insights daily.
Meta's Tribe V2 is a foundation model trained on over 500 hours of fMRI data. It creates a "digital twin" of neural activity to predict brain responses to sights and sounds, raising questions about its application by a social media company.
The performance ceiling for non-invasive Brain-Computer Interfaces (BCIs) is rising dramatically, not from better sensors, but from advanced AI. New models can extract high-fidelity signals from noisy data collected outside the skull, potentially making surgical implants like Neuralink's unnecessary for sophisticated use cases.
Meta's enormous AI capital expenditure, despite the lack of a hit product so far, rests on proprietary data from its massive platform. Unlike the speculative Metaverse venture, this investment is a direct response to observed exponential growth in user engagement with AI content, even though users publicly claim to dislike it.
The strategic purpose of engaging AI companion apps like Character.ai is not merely user retention but to create a "gold mine" of human interaction data. This data serves as essential fuel in the larger race among tech giants to build more powerful Artificial General Intelligence (AGI) models.
Marketing analytics firm Alembic uses spiking neural networks, which act as a digital twin of the human brain, for marketing attribution. Unlike predictive models that need vast historical data, these networks can identify the impact of a rare event (like the Olympics) by detecting pattern changes in real time, much as a child learns "dog" after seeing one once.
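Alembic's actual system is proprietary; as a minimal sketch of the spiking idea, a single leaky integrate-and-fire neuron (the basic unit of a spiking neural network) can flag a rare surge in a signal stream immediately, with no historical training data. All values and thresholds below are illustrative assumptions.

```python
def lif_detect(stream, leak=0.9, threshold=3.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    each step, accumulates input, and emits a spike when it crosses
    the threshold."""
    v = 0.0
    spikes = []
    for t, x in enumerate(stream):
        v = leak * v + x          # leak, then integrate the new input
        if v >= threshold:
            spikes.append(t)      # spike: the neuron flags an anomaly
            v = 0.0               # reset membrane potential after firing
    return spikes

# Baseline noise never reaches the threshold; a single rare surge
# (the "Olympics moment") drives an immediate spike.
signal = [0.2] * 10 + [4.0] + [0.2] * 10
print(lif_detect(signal))  # → [10]
```

Because the neuron reacts to the event itself rather than to a fitted model of the past, a one-off event registers the first time it appears, which is the "sees a dog once" behavior the insight describes.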
Paradromics uses LLMs to decode brain signals for speech, much like how speech-to-text cleans up audio. This allows for faster, more accurate "thought-to-text" by predicting what a user intends to say, even with imperfect neural data, and correcting errors in real-time.
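Paradromics' decoder and language model are proprietary; as a toy sketch of the error-correction stage only, the snippet below snaps each noisily decoded word onto the closest entry in a small vocabulary, standing in for how a language model cleans up imperfect neural output. The vocabulary and inputs are invented for illustration.

```python
import difflib

# Hypothetical stand-in for the language-model stage: a vocabulary
# that noisy decoder outputs get snapped onto.
VOCAB = ["i", "want", "a", "glass", "of", "water", "please"]

def correct(noisy_words, vocab=VOCAB):
    """Replace each noisily decoded word with its closest vocabulary
    entry, mimicking real-time correction of imperfect neural data."""
    fixed = []
    for w in noisy_words:
        best = difflib.get_close_matches(w, vocab, n=1, cutoff=0.0)
        fixed.append(best[0])
    return " ".join(fixed)

print(correct(["i", "wnat", "a", "glas", "of", "watr"]))
# → "i want a glass of water"
```

A real system would score whole candidate sentences with an LLM rather than match words independently, but the principle is the same: prior knowledge of language fills in what the neural signal leaves ambiguous.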
Instead of generating data for human analysis, Mark Zuckerberg advocates a new approach: scientists should prioritize creating novel tools and experiments specifically to generate data that will train and improve AI models. The goal shifts from direct human insight to creating smarter AI that makes novel discoveries.
Meta's multi-billion dollar superintelligence lab is struggling, with its open-source strategy deemed a failure due to high costs. The company's success now hinges on integrating "good enough" AI into products like smart glasses, rather than competing to build the absolute best model.
A novel training method involves adding an auxiliary task for AI models: predicting the neural activity of a human observing the same data. This "brain-augmented" learning could force the model to adopt more human-like internal representations, improving generalization and alignment beyond what simple labels can provide.
A neuroscientist-led startup is growing live neurons on electrodes not just for compute efficiency, but as a platform to discover novel algorithms. By studying how biological networks process information, they identify neuroscience principles that can be used as software plugins to improve current AI models and find successors to the transformer architecture.