Noetik argues that, unlike general AI, which can leverage vast existing datasets, progress in biology requires designing and generating specific, high-quality data with foresight into the models that will be trained on it. They compare this to the intentional, decades-long creation of the Protein Data Bank (PDB), the dataset that made protein-folding models possible.
The bottleneck for AI in drug discovery is not the algorithm but the lack of high-quality, large-scale biological data. New platforms are needed to generate this necessary "substrate" for AI models to learn from, challenging the narrative that better models alone are the solution.
The next leap in biotech moves beyond applying AI to existing data. CZI pioneers a model where 'frontier biology' and 'frontier AI' are developed in tandem. Experiments are now designed specifically to generate novel data that will ground and improve future AI models, creating a virtuous feedback loop.
The primary bottleneck for creating powerful foundation models in biology is the lack of clean, large-scale experimental data—orders of magnitude less than what's available for LLMs. This creates a major opportunity for "data foundries" that use robotic labs to generate high-quality biological data at scale.
To break the data bottleneck in AI protein engineering, companies now generate massive synthetic datasets. By creating novel "synthetic epitopes" and measuring their binding, they can produce thousands of validated positive and negative training examples in a single experiment, massively accelerating model development.
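A minimal sketch of how one round of such an experiment could be turned into supervised training data, assuming a hypothetical binding-affinity readout and threshold; the names and numbers here are illustrative, not any company's actual pipeline:

```python
# Illustrative sketch: convert one plate of binding measurements into
# labeled training examples. The BindingResult fields and the 1.0
# affinity threshold are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class BindingResult:
    binder_seq: str     # candidate protein sequence
    epitope_seq: str    # synthetic epitope it was tested against
    affinity: float     # measured binding signal (arbitrary units)

def to_training_examples(results, threshold=1.0):
    """Label each measured pair: strong binders become positives,
    everything else becomes a *validated* negative (not just 'unknown')."""
    return [
        (r.binder_seq, r.epitope_seq, 1 if r.affinity >= threshold else 0)
        for r in results
    ]

# One experiment can yield thousands of such labeled pairs at once.
plate = [
    BindingResult("EVQLVESGGGLVQ", "SYNEPI_001", 2.4),
    BindingResult("EVQLVESGGGLVQ", "SYNEPI_002", 0.1),
]
print(to_training_examples(plate))
```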
Xaira's core strategy involves creating massive, proprietary datasets that reveal causal biology. By systematically perturbing every gene in a cell to observe its effects, they generate unique training data for their models, quadrupling the world's supply of such information with a single publication.
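To make the "perturb every gene" idea concrete, here is a toy sketch of the kind of perturbation-to-readout table such a screen produces. The shapes and names are illustrative assumptions, not Xaira's actual data format, and real screens (e.g., Perturb-seq) measure single-cell expression rather than a dense matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genome-wide perturbation screen: one row per knocked-out gene,
# one column per measured readout gene.
n_genes = 1000
perturbed = [f"GENE_{i}" for i in range(n_genes)]
effects = rng.normal(size=(n_genes, n_genes))  # effect of KO i on readout j

# Each (perturbation, response) pair is one causal training example:
# "when gene i is knocked out, expression shifts by this vector."
dataset = {gene: effects[i] for i, gene in enumerate(perturbed)}
print(dataset["GENE_0"][:5])
```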
The future of valuable AI lies not in models trained on the abundant public internet, but in those built on scarce, proprietary data. For fields like robotics and biology, this data doesn't exist to be scraped; it must be actively created, making the data generation process itself the key competitive moat.
Demonstrating extreme conviction, Noetik invested a year and a half in lab setup, tumor sourcing, and data processing before having a dataset large enough to train its first models. This highlights the immense upfront investment and risk required for a data-first approach in bio-AI, where no off-the-shelf data exists.
Applying AI to biology isn't just a big-data problem. The training data must be structured for reinforcement learning: it must be complete (including negative results) and support a feedback loop in which AI predictions are tested in the lab and the results are used to refine the model.
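A minimal sketch of such a lab-in-the-loop cycle, with a simulated assay standing in for real experiments; the logistic model, batch sizes, and `run_lab_assay` function are illustrative assumptions, not a description of any lab's actual system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
true_w = rng.normal(size=8)

def run_lab_assay(X):
    """Stand-in for a wet-lab experiment: returns binary outcomes.
    Negatives are kept, since they are training signal too."""
    return (X @ true_w + rng.normal(scale=0.5, size=len(X)) > 0).astype(int)

# Seed dataset from an initial experiment.
X = rng.normal(size=(20, 8))
y = run_lab_assay(X)

model = LogisticRegression()
for _ in range(5):
    model.fit(X, y)
    # Propose candidates and send the most promising to the "lab"...
    candidates = rng.normal(size=(200, 8))
    scores = model.predict_proba(candidates)[:, 1]
    chosen = candidates[np.argsort(scores)[-10:]]
    results = run_lab_assay(chosen)          # positives AND negatives
    # ...then fold the results back in to refine the model.
    X = np.vstack([X, chosen])
    y = np.concatenate([y, results])

print(f"final training set: {len(X)} lab-validated examples")
```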
While petabytes of observational DNA sequence data exist, that data is insufficient for the next wave of AI. Building powerful, functional models requires causal data from experiments that systematically test function, and generating it is the current bottleneck.
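A tiny simulation of why observational data alone falls short: two genes driven by a shared hidden factor correlate strongly, yet an intervention on one leaves the other unchanged. Only the perturbation experiment reveals the absence of a causal link. This is a purely illustrative toy, not real biology:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Confounded system: a hidden factor Z drives both gene A and gene B.
z = rng.normal(size=n)
a = z + 0.3 * rng.normal(size=n)
b = z + 0.3 * rng.normal(size=n)
print("observational corr(A, B):", round(np.corrcoef(a, b)[0, 1], 2))  # ~0.9

# Intervention: force A to a fixed value, as a knockout would.
# Z still drives B, so B's distribution is unchanged: A does not cause B,
# despite the strong observational correlation.
b_after_do = z + 0.3 * rng.normal(size=n)
print("mean B observationally vs under do(A=0):",
      round(b.mean(), 2), round(b_after_do.mean(), 2))
```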
CZI's strategy creates a "frontier biology lab" to co-develop advanced data collection techniques alongside its "frontier AI lab." This integrated approach ensures biological data is generated specifically to train and ground next-generation AI models, moving beyond using whatever data happens to be available.