The primary obstacle to building sophisticated AI models of cells isn't the AI itself but the data. Existing datasets typically record perturbations of one cellular variable at a time, missing the interaction effects that emerge when several variables change simultaneously. New experimental platforms are needed to generate this multi-dimensional data.
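
A back-of-the-envelope sketch shows why one-at-a-time designs fall short: the number of multi-variable perturbations grows combinatorially, far beyond what conventional screens cover. The 100-gene panel below is hypothetical, purely for illustration.

```python
from itertools import combinations
from math import comb

# Hypothetical panel of perturbable variables (gene knockouts here);
# the names and panel size are illustrative, not from any real dataset.
genes = [f"gene_{i}" for i in range(100)]

# One-at-a-time perturbations: the design most existing datasets cover.
singles = list(combinations(genes, 1))
print(len(singles))        # 100 experiments

# Pairwise perturbations: the minimum needed to observe two-way interactions.
print(comb(100, 2))        # 4950 experiments
print(comb(100, 3))        # 161700 -- the space explodes combinatorially
```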

Related Insights

AI isn't just for designing RNA sequences. Its real value is in creating predictive models of complex cellular functions, which let scientists determine the precise set of instructions (RNAs) needed to make a cell perform a multi-step task, like targeting a brain tumor.

The bottleneck for AI in drug discovery is not the algorithm but the lack of high-quality, large-scale biological data. New platforms are needed to generate this necessary "substrate" for AI models to learn from, challenging the narrative that better models alone are the solution.

The primary bottleneck for creating powerful foundation models in biology is the lack of clean, large-scale experimental data—orders of magnitude less than what's available for LLMs. This creates a major opportunity for "data foundries" that use robotic labs to generate high-quality biological data at scale.

AI models trained on descriptive data (e.g., RNA-seq) can classify cell states but fail to predict how to transition a diseased cell to a healthy one. True progress requires generating massive "causal" datasets that show the effects of specific genetic perturbations.
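
A minimal toy simulation (invented numbers, not a real dataset) makes the distinction concrete: when a hidden factor drives both a gene's expression and the disease state, observational data makes the gene look highly predictive, yet intervening on it changes nothing. Only perturbation data exposes the difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy generative model (purely illustrative): a hidden factor drives both a
# measured gene's expression and the cell's disease state.
hidden = rng.normal(size=n)
expression = hidden + 0.1 * rng.normal(size=n)            # RNA-seq readout
diseased = (hidden + 0.1 * rng.normal(size=n) > 0).astype(float)

# Descriptive data: expression "predicts" the disease state strongly...
print(np.corrcoef(expression, diseased)[0, 1])            # ~0.8

# ...but directly setting expression (a perturbation, do(expression)) leaves
# the state unchanged, because the gene is a marker here, not a cause.
forced = rng.normal(size=n)                               # do(expression)
state_after = (hidden + 0.1 * rng.normal(size=n) > 0).astype(float)
print(np.corrcoef(forced, state_after)[0, 1])             # ~0.0
```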

The progress of AI in predicting cancer treatment is stalled not by algorithms, but by the data used to train them. Relying solely on static genetic data is insufficient. The critical missing piece is functional, contextual data showing how patient cells actually respond to drugs.

The next frontier in preclinical research involves feeding multi-omics and spatial data from complex 3D cell models into AI algorithms. This synergy will enable a crucial shift from merely observing biological phenomena to accurately predicting therapeutic outcomes and patient responses.

To truly understand biological systems, data scale is less important than data quality. The most informative data comes from capturing the dynamic interactions of a system *while* it's being perturbed (e.g., by a drug), not from static snapshots of a system at rest.
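
A toy sketch of the idea, assuming nothing beyond first-order kinetics with made-up rate constants: two systems can be indistinguishable at rest yet trivially separable once you watch them respond to a perturbation.

```python
import numpy as np

def simulate(k, x0=2.0, dt=0.01, steps=2000):
    """Relax dx/dt = -k * (x - 1.0) after a perturbation kicks x up to x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-k * (xs[-1] - 1.0)))
    return np.array(xs)

# Two hypothetical processes sharing the *same* resting steady state (x = 1.0)
# but with very different kinetics; the rate constants are invented.
fast, slow = simulate(k=5.0), simulate(k=0.5)

# A static snapshot taken at rest cannot tell the two systems apart...
print(fast[-1], slow[-1])   # both ~1.0

# ...but a trace sampled *during* relaxation separates them immediately.
print(fast[50], slow[50])   # ~1.08 vs ~1.78
```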

The bottleneck for AI in drug development isn't the sophistication of the models but the absence of large-scale, high-quality biological datasets. Without comprehensive data on how drugs interact with complex human systems, even the best AI models cannot make accurate predictions.

While petabytes of observational DNA sequence data exist, they are insufficient for the next wave of AI. Powerful, functional models require causal data, generated by experiments that systematically probe function, and producing that data is the current bottleneck.

The founder of AI and robotics firm Medra argues that scientific progress is not limited by a lack of ideas or AI-generated hypotheses. Instead, the critical constraint is the physical capacity to test these ideas and generate high-quality data to train better AI models.