Traditional science failed to create equations for complex biological systems because biology is too "bespoke." AI succeeds by discerning patterns from vast datasets, effectively serving as the "language" for modeling biology, much like mathematics is the language of physics.
Structural data alone cannot evolve AI from pattern matching to an understanding of the physics of protein engineering. Models also need physical parameters such as Gibbs free energy (ΔG), obtainable from affinity measurements, to become truly predictive and transformative for therapeutic development.
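For concreteness, the bridge from an affinity measurement to a physical parameter is a one-line thermodynamic identity: a measured dissociation constant Kd converts to a standard binding free energy via ΔG° = RT ln(Kd). A minimal Python sketch (the variant names and Kd values are invented for illustration):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def delta_g_from_kd(kd_molar: float) -> float:
    """Standard binding free energy from a dissociation constant,
    dG = RT * ln(Kd / 1 M): tighter binders (smaller Kd) give a
    more negative dG."""
    return R * T * math.log(kd_molar)

# Hypothetical affinity measurements for three protein variants.
for name, kd in [("wild-type", 1e-6), ("variant A", 1e-9), ("variant B", 5e-8)]:
    print(f"{name}: Kd = {kd:.0e} M -> dG = {delta_g_from_kd(kd):+.1f} kcal/mol")
```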
Powerful AI models for biology exist, but the industry lacks a breakthrough user interface—a "ChatGPT for science"—that makes them accessible, trustworthy, and integrated into wet lab scientists' workflows. This adoption and translation problem is the biggest hurdle, not the raw capability of the AI models themselves.
Unlike traditional methods that simulate physical interactions like a key in a lock, ProPhet's AI learns the fundamental patterns governing why certain molecules and proteins interact. This allows for prediction without needing slow, expensive, and often impossible physical or computational simulations.
Instead of building from scratch, ProPhet leverages existing transformer models to create unique mathematical "languages" for proteins and molecules. Their core innovation is an additional model that translates between them, creating a unified space to predict interactions at scale.
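ProPhet has not published its architecture, so purely as a hedged sketch: the "translation between languages" idea resembles a CLIP-style dual encoder, in which pretrained protein and molecule embeddings are projected into one shared space and pairs are scored by similarity. All class names, dimensions, and the choice of PyTorch below are assumptions, not ProPhet's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionScorer(nn.Module):
    """Projects pretrained protein and molecule embeddings into one
    shared space and scores pairs by cosine similarity (CLIP-style).
    Dimensions are illustrative, not ProPhet's actual values."""

    def __init__(self, protein_dim=1280, molecule_dim=768, shared_dim=256):
        super().__init__()
        self.protein_proj = nn.Linear(protein_dim, shared_dim)    # "translator" 1
        self.molecule_proj = nn.Linear(molecule_dim, shared_dim)  # "translator" 2

    def forward(self, protein_emb, molecule_emb):
        p = F.normalize(self.protein_proj(protein_emb), dim=-1)
        m = F.normalize(self.molecule_proj(molecule_emb), dim=-1)
        return (p * m).sum(dim=-1)  # higher score = predicted interaction

# Usage with stand-in embeddings; in practice these would come from
# pretrained protein- and molecule-language models (e.g., ESM-2, ChemBERTa).
scorer = InteractionScorer()
protein_emb = torch.randn(4, 1280)   # batch of 4 protein embeddings
molecule_emb = torch.randn(4, 768)   # batch of 4 molecule embeddings
print(scorer(protein_emb, molecule_emb))  # 4 interaction scores in [-1, 1]
```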
A classical, bottom-up simulation of a cell is infeasible, according to John Jumper. He sees the more practical path forward as fusing specialized models like AlphaFold with the broad reasoning of LLMs to create hybrid systems that understand biology.
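A toy sketch of what such a hybrid system could look like at the software level: a general reasoning layer that routes structural sub-questions to a specialized predictor. Both functions here are hypothetical stand-ins, not real APIs, and the returned values are placeholders.

```python
def specialized_structure_model(sequence: str) -> dict:
    """Stand-in for an AlphaFold-like model: sequence -> structure info.
    The confidence value is a fabricated placeholder."""
    return {"sequence": sequence, "plddt": 87.3}

def llm_orchestrator(question: str, tools: dict) -> str:
    """Stand-in for an LLM with tool use: decides when to defer a
    sub-question to a specialized model instead of answering directly."""
    if "structure" in question.lower():
        result = tools["fold"]("MKTAYIAKQR")  # example sequence
        return f"Predicted structure confidence (pLDDT): {result['plddt']}"
    return "Answer from general reasoning alone."

print(llm_orchestrator("What is the structure of this protein?",
                       {"fold": specialized_structure_model}))
```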
The next major AI breakthrough will come from applying generative models to complex systems beyond human language, such as biology. By treating biological processes as a unique "language," AI could discover novel therapeutics or research paths, leading to a "Move 37" moment in science.
Current AI can learn to predict complex patterns, like planetary orbits, from data. However, it struggles to abstract the underlying causal laws, such as Newton's second law (F = ma). This leap to a higher level of abstraction remains a fundamental challenge beyond simple pattern recognition.
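The gap is easy to demonstrate: a curve-fitting model can reproduce an orbit inside the window it was trained on while failing completely outside it, because it has captured the trajectory rather than the law. A minimal numpy illustration, where a degree-10 polynomial stands in for any pure pattern matcher:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Ground truth: one coordinate of a circular orbit, x(t) = cos(t).
t_train = np.linspace(0, 4 * np.pi, 200)
x_train = np.cos(t_train)

# "Pattern matcher": a degree-10 polynomial fit to the observed window.
model = Polynomial.fit(t_train, x_train, deg=10)

# Inside the training window, the fit is excellent...
print(model(2 * np.pi), np.cos(2 * np.pi))    # both close to 1.0

# ...but extrapolation beyond it diverges wildly: the model learned
# the curve, not the law that generates it.
print(model(8 * np.pi), np.cos(8 * np.pi))    # polynomial blows up
```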
Just as biology deciphers the complex systems created by evolution, mechanistic interpretability seeks to understand the "how" inside neural networks. Instead of treating models as black boxes, it examines their internal parameters and activations to reverse-engineer how they work, moving beyond just measuring their external behavior.
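The first practical step in that direction is simply recording what a network computes internally. A minimal PyTorch sketch, using a toy model as a stand-in for the system under study:

```python
import torch
import torch.nn as nn

# Toy two-layer network standing in for a model being reverse-engineered.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Capture intermediate activations: the step beyond black-box evaluation
# is looking at what the internals actually compute on a given input.
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(save_activation("post_relu"))

_ = model(torch.randn(1, 8))
acts = activations["post_relu"]
print("hidden units firing:", (acts > 0).sum().item(), "of", acts.numel())
```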
AlphaFold's success in identifying a key protein for human fertilization (out of 2,000 possibilities) showcases AI's power. It acts as a hypothesis generator, dramatically reducing the search space for expensive and time-consuming real-world experiments.
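The published AlphaFold screen is far more involved, but the workflow pattern it illustrates is simple: score every candidate computationally, then send only the top handful to the bench. In this sketch, score_interaction is a hypothetical stand-in for a structure-based interaction predictor, and the random scores are placeholders:

```python
import random

random.seed(0)
candidates = [f"protein_{i:04d}" for i in range(2000)]  # ~2,000 possibilities

def score_interaction(candidate: str) -> float:
    """Hypothetical stand-in for a predicted-interaction confidence
    score against the receptor of interest."""
    return random.random()

# Rank every candidate in silico; validate only the best few in the lab.
ranked = sorted(candidates, key=score_interaction, reverse=True)
shortlist = ranked[:5]
print("wet-lab shortlist:", shortlist)
```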
The next frontier in preclinical research involves feeding multi-omics and spatial data from complex 3D cell models into AI algorithms. This synergy will enable a crucial shift from merely observing biological phenomena to accurately predicting therapeutic outcomes and patient responses.
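In its simplest form, "feeding multi-omics and spatial data into an algorithm" means fusing per-sample feature blocks and fitting a predictor of response. A hedged sketch with synthetic data; all shapes, modality names, and the logistic-regression choice are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # samples from 3D cell models (illustrative count)

# Synthetic stand-ins for per-sample omics blocks.
transcriptomics = rng.normal(size=(n, 50))   # e.g., 50 gene-expression PCs
proteomics      = rng.normal(size=(n, 20))   # e.g., 20 protein abundances
spatial         = rng.normal(size=(n, 10))   # e.g., 10 spatial-niche summaries

# Early fusion: concatenate modalities into one feature matrix.
X = np.hstack([transcriptomics, proteomics, spatial])
y = (X[:, 0] + X[:, 55] > 0).astype(int)     # synthetic "responder" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```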