Applying AI to biology isn't just a big data problem. The training data must be structured for reinforcement learning: it must be complete (including negative results) and must support a feedback loop in which AI predictions are tested in the lab and the results are used to refine the model.
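The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any lab's actual pipeline: the model ranks candidate perturbations, the top picks go to a (simulated) wet-lab assay, and both positive and negative outcomes update the model. All names (`model_predict`, `run_lab_assay`, `refine`) are invented for the sketch.

```python
import random

random.seed(0)  # deterministic stand-in outcomes for the sketch

def model_predict(candidates, weights):
    """Rank candidate perturbations; higher score = predicted to help."""
    return sorted(candidates, key=lambda c: weights.get(c, 0.0), reverse=True)

def run_lab_assay(candidate):
    """Stand-in for a wet-lab experiment returning a success/failure outcome.
    In practice this is the slow, expensive step of the loop."""
    return random.random() < 0.3  # placeholder ground truth

def refine(weights, candidate, outcome):
    """Update the model with BOTH positive and negative results."""
    weights[candidate] = weights.get(candidate, 0.0) + (1.0 if outcome else -1.0)
    return weights

weights = {}
candidates = [f"gene_{i}" for i in range(20)]
for _ in range(3):                            # three lab-in-the-loop rounds
    ranked = model_predict(candidates, weights)
    for c in ranked[:5]:                      # test only the top predictions
        outcome = run_lab_assay(c)
        weights = refine(weights, c, outcome) # negative results count too
```

The key structural point is that negative assay results still produce a model update, which is why "complete" data (including failures) matters.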

Related Insights

The next leap in biotech moves beyond applying AI to existing data. CZI pioneers a model where 'frontier biology' and 'frontier AI' are developed in tandem. Experiments are now designed specifically to generate novel data that will ground and improve future AI models, creating a virtuous feedback loop.

The primary bottleneck for creating powerful foundation models in biology is the lack of clean, large-scale experimental data—orders of magnitude less than what's available for LLMs. This creates a major opportunity for "data foundries" that use robotic labs to generate high-quality biological data at scale.

AI models trained on descriptive data (e.g., RNA-seq) can classify cell states but fail to predict how to transition a diseased cell to a healthy one. True progress requires generating massive "causal" datasets that show the effects of specific genetic perturbations.
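The descriptive-versus-causal distinction can be made concrete with a toy data structure. This is a hypothetical schema invented for illustration: a descriptive record only says what state a cell is in, while a causal record pairs a specific intervention with its measured effect.

```python
from dataclasses import dataclass

@dataclass
class PerturbationRecord:
    """One causal observation: perturb a gene, measure the expression shift."""
    cell_type: str
    perturbed_gene: str           # e.g. a CRISPR knockout target
    baseline_expression: dict     # gene -> expression level before perturbation
    perturbed_expression: dict    # gene -> expression level after perturbation

# Illustrative values only, not real measurements.
rec = PerturbationRecord(
    cell_type="T-cell",
    perturbed_gene="TP53",
    baseline_expression={"MYC": 1.2},
    perturbed_expression={"MYC": 0.4},
)

# The causal signal is the before/after delta attributable to the intervention.
effect = {g: rec.perturbed_expression[g] - rec.baseline_expression[g]
          for g in rec.baseline_expression}
```

A model trained on many such paired records can learn what an intervention *does*, which is what a classifier trained only on snapshots (e.g. RNA-seq profiles) cannot recover.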

The critical challenge in AI development isn't just improving a model's raw accuracy but building a system that reliably learns from its mistakes. The gap between an 85% accurate prototype and a 99% production-ready system is bridged by an infrastructure that systematically captures and recycles errors into high-quality training data.
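The error-recycling infrastructure mentioned above can be sketched as a single evaluation pass that routes every miss back into the training set. A minimal sketch with a toy model; the function name and data shapes are hypothetical.

```python
def evaluate_and_recycle(model_fn, batch, training_set):
    """Run predictions on a batch; capture every miss with its corrected
    label and append it to the training set, so errors become new data."""
    errors = []
    for inputs, truth in batch:
        pred = model_fn(inputs)
        if pred != truth:
            errors.append((inputs, truth))   # capture the failure case
    training_set.extend(errors)              # recycle as labeled examples
    accuracy = 1 - len(errors) / len(batch)
    return accuracy, training_set

# Toy model that always predicts 0, on a batch where half the labels are 1.
batch = [((i,), i % 2) for i in range(10)]
acc, ts = evaluate_and_recycle(lambda x: 0, batch, [])
# The five misclassified examples are now labeled training data.
```

Each retraining round then closes the gap the paragraph describes: the system's own mistakes are the highest-value examples it can learn from next.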

AI models are trained on large lab-generated datasets. The models then simulate biology and make predictions, which are validated back in the lab. This feedback loop accelerates discovery by replacing random experimental "walks" with a more direct computational route, making research faster and more efficient.

Static data scraped from the web is becoming less central to AI training. The new frontier is "dynamic data," where models learn through trial-and-error in synthetic environments (like solving math problems), effectively creating their own training material via reinforcement learning.
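A minimal sketch of that dynamic-data idea, using arithmetic as the synthetic environment: tasks are generated on the fly, a (stand-in) model attempts them, and a cheap verifier assigns the reward, producing training data with no web scraping involved. All function names here are invented for illustration.

```python
import random

def make_task(rng):
    """Generate an arithmetic task with a mechanically verifiable answer."""
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    return f"{a}+{b}", a + b

def attempt(prompt, rng):
    """Stand-in for a model's noisy attempt at the task."""
    a, b = map(int, prompt.split("+"))
    return a + b + rng.choice([0, 0, 1])  # sometimes off by one

rng = random.Random(0)
dataset = []
for _ in range(100):
    prompt, truth = make_task(rng)
    answer = attempt(prompt, rng)
    reward = 1 if answer == truth else 0     # cheap, instant verification
    dataset.append((prompt, answer, reward))  # self-generated training data
```

The contrast with biology (next insight) is exactly this reward step: here verification costs nothing and is instant; a clinically meaningful biological claim has no equally cheap verifier.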

Unlike math or code with cheap, fast rewards, clinically valuable biology problems lack easily verifiable ground truths. This makes it difficult to create the rapid reinforcement learning loops that drive explosive AI progress in other fields.

The bottleneck for AI in drug development isn't the sophistication of the models but the absence of large-scale, high-quality biological datasets. Without comprehensive data on how drugs interact within complex human systems, even the best AI models cannot make accurate predictions.

Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (even when it disproves the hypothesis), and refine. AI needs this same iterative capability with the real world to make genuine discoveries.

The founder of AI and robotics firm Medra argues that scientific progress is not limited by a lack of ideas or AI-generated hypotheses. Instead, the critical constraint is the physical capacity to test these ideas and generate high-quality data to train better AI models.

Effective Biology AI Needs Reinforcement Learning Datasets, Not Just Massive Data | RiffOn