Unlike math or code, which offer cheap, fast reward signals, clinically valuable biology problems lack easily verifiable ground truths. This makes it hard to build the rapid reinforcement learning loops that have driven explosive AI progress in other fields.
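To make the contrast concrete, here is a minimal Python sketch under assumed, illustrative conditions (the function names and the wet-lab framing are not from the source): a reward for generated code can be checked automatically in seconds, while a therapeutic hypothesis has no programmatic verifier at all.

```python
import subprocess
import sys
import tempfile


def reward_for_code(candidate_source: str, test_source: str) -> float:
    """Verifiable reward: append unit tests to the model's output and run them.
    The check is automatic and takes seconds, so an RL loop can sample and
    score candidates millions of times."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_source + "\n" + test_source)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return 1.0 if result.returncode == 0 else 0.0


def reward_for_therapeutic(candidate_molecule: str) -> float:
    """No cheap verifier exists: the ground truth is a wet-lab assay or,
    ultimately, a clinical trial, each evaluation taking weeks to years.
    The rapid loop above has no biological equivalent."""
    raise NotImplementedError("reward requires a physical experiment")
```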
AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress requires a messy, societal conversation to define 'success,' not just a developer-built technical leaderboard.
While AI promises to design therapeutics computationally, it does not eliminate the need for physical lab work. Even if future models required no training data, their predicted outputs would still have to be experimentally validated. The result is an inescapable cycle in which high-throughput data generation remains critical to progress.
In high-stakes fields like pharma, AI's ability to generate more ideas (e.g., drug targets) is less valuable than its ability to aid in decision-making. Physical constraints on experimentation mean you can't test everything. The real need is for tools that help humans evaluate, prioritize, and gain conviction on a few key bets.
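A rough sketch of what such a prioritization aid might look like, assuming hypothetical fields (predicted_efficacy, confidence, assay_cost) and a stand-in scoring rule; the point it illustrates is that the scarce resource is experimental capacity, not the number of AI-generated ideas.

```python
from dataclasses import dataclass


@dataclass
class TargetHypothesis:
    name: str
    predicted_efficacy: float  # model-estimated chance the target matters
    confidence: float          # how much evidence backs that estimate
    assay_cost: float          # relative cost of the wet-lab experiment


def prioritize(hypotheses: list[TargetHypothesis], budget: float) -> list[TargetHypothesis]:
    """Rank candidates by estimated value per unit of lab cost and keep only
    what the experimental budget allows; generating more candidates does not
    help once the budget is the binding constraint."""
    ranked = sorted(
        hypotheses,
        key=lambda h: (h.predicted_efficacy * h.confidence) / h.assay_cost,
        reverse=True,
    )
    selected, spent = [], 0.0
    for h in ranked:
        if spent + h.assay_cost <= budget:
            selected.append(h)
            spent += h.assay_cost
    return selected
```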
Despite AI's power, roughly 90% of drug candidates still fail in clinical trials. John Jumper argues the bottleneck isn't finding molecules that bind a target protein, but our fundamental lack of understanding of disease causality, as with Alzheimer's: a biology problem, not a technology one.
AI progress in predicting cancer treatment response is stalled not by algorithms but by the data used to train them. Static genetic data alone is insufficient; the critical missing piece is functional, contextual data showing how patient cells actually respond to drugs.
Demis Hassabis identifies a key obstacle for AGI. Unlike in math or games where answers can be verified, the messy real world lacks clear success metrics. This makes it difficult for AI systems to use self-improvement loops, limiting their ability to learn and adapt outside of highly structured domains.
The bottleneck for AI in drug development isn't the sophistication of the models but the absence of large-scale, high-quality biological datasets. Without comprehensive data on how drugs interact within complex human systems, even the best AI models cannot make accurate predictions.
Current LLMs fail at science because they cannot iterate. True scientific inquiry is a loop: form a hypothesis, run an experiment, analyze the result (even a negative one), and refine. AI needs the same ability to iterate against the real world to make genuine discoveries.
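A minimal sketch of the loop just described, with placeholder callables standing in for the experiment and analysis steps; none of the names come from the source, and in practice run_experiment is a physical assay rather than another model call.

```python
def scientific_loop(initial_hypothesis, run_experiment, analyze, refine, max_rounds=10):
    """Propose, test against the world, learn from the result, refine."""
    hypothesis = initial_hypothesis
    for _ in range(max_rounds):
        observation = run_experiment(hypothesis)    # contact with the real world
        finding = analyze(hypothesis, observation)  # negative results still inform
        if finding.get("confirmed"):                # assumed dict-shaped analysis output
            return hypothesis
        hypothesis = refine(hypothesis, finding)    # update and try again
    return hypothesis
```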
Traditional science failed to produce equations for complex biological systems because biology is too "bespoke." AI succeeds by discerning patterns from vast datasets, effectively serving as the "language" for modeling biology, much as mathematics is the language of physics.
The founder of AI and robotics firm Medra argues that scientific progress is not limited by a lack of ideas or AI-generated hypotheses. Instead, the critical constraint is the physical capacity to test these ideas and generate high-quality data to train better AI models.