Despite their prevalence, simulation methods like molecular dynamics (MD) and density functional theory (DFT) often fall short in practice. They excel at modeling idealized, defect-free systems but cannot handle the complexity of real-world, 'interesting' materials with defects and dopants. This discrepancy makes their practical utility much lower than is often believed.
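To make the scale problem concrete, here is a hedged back-of-envelope sketch; the dopant concentration and the cubic cost scaling of conventional DFT are illustrative assumptions, not figures from the source. Representing even one dilute dopant forces a supercell far beyond the idealized unit cell.

```python
# Back-of-envelope: why a dilute dopant breaks the idealized-unit-cell
# picture. Numbers below are illustrative assumptions, not source data.

atoms_ideal = 100        # atoms in a typical pristine unit-cell calculation
dopant_fraction = 1e-4   # assume one dopant per 10,000 atoms (0.01%)

# A supercell must contain at least one dopant, so it needs
# ~1/dopant_fraction atoms.
atoms_needed = int(1 / dopant_fraction)

# Conventional DFT cost grows roughly as O(N^3) in atom count.
cost_ratio = (atoms_needed / atoms_ideal) ** 3

print(f"supercell: {atoms_needed:,} atoms, ~{cost_ratio:.0e}x the cost")
# -> supercell: 10,000 atoms, ~1e+06x the cost
```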
Demis Hassabis notes that while generative AI can create visually realistic worlds, the underlying physics of those worlds is only approximate. They look correct to a casual observer but fail rigorous tests. This gap between plausible and accurate physics is a key challenge that must be solved before such models can be reliably used for robotics training.
For protein engineering, structural data alone is not enough to move AI from pattern matching to an understanding of physics. Models also need physical parameters like the Gibbs free energy of binding (delta-G), obtainable from affinity measurements, to become truly predictive and transformative for therapeutic development.
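The bridge from affinity measurements to delta-G is the standard thermodynamic relation ΔG° = RT ln(Kd). A minimal sketch of that conversion, using an illustrative 1 nM affinity:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K (25 C)

def delta_g_from_kd(kd_molar: float) -> float:
    """Standard binding free energy (J/mol) from a measured dissociation
    constant, via the textbook relation deltaG = R*T*ln(Kd / 1 M)."""
    return R * T * math.log(kd_molar)

# A 1 nM binder, a typical high-affinity measurement.
dg = delta_g_from_kd(1e-9)
print(f"deltaG = {dg / 1000:.1f} kJ/mol")  # -> deltaG = -51.4 kJ/mol
```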
There's a significant gap between AI performance on simulated benchmarks and in the real world. Despite scoring highly on evaluations, deployed AIs make "silly mistakes that no human would ever dream of doing," suggesting that current benchmarks don't capture the messiness and unpredictability of reality.
A classical, bottom-up simulation of a cell is infeasible, according to John Jumper. He sees the more practical path forward as fusing specialized models like AlphaFold with the broad reasoning of LLMs to create hybrid systems that understand biology.
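As a rough illustration of what such a hybrid might look like in code, here is a hypothetical sketch; `llm_plan`, `fold`, and `llm_summarize` are stand-in stubs for an LLM and an AlphaFold-like tool, not real APIs.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    sequences_to_fold: list[str]

def llm_plan(question: str) -> Plan:
    """Stub for the LLM's broad-reasoning step: decide which protein
    sequences the question hinges on."""
    return Plan(sequences_to_fold=["MKTAYIAKQR"])

def fold(sequence: str) -> str:
    """Stub for a specialized structure model (an AlphaFold-like tool)."""
    return f"<predicted structure for {sequence}>"

def llm_summarize(question: str, evidence: dict[str, str]) -> str:
    """Stub for the LLM reasoning over the specialist's outputs."""
    return f"Answer to {question!r}, grounded in {len(evidence)} structure(s)."

def answer_biology_question(question: str) -> str:
    plan = llm_plan(question)                                 # broad reasoning
    evidence = {s: fold(s) for s in plan.sequences_to_fold}   # delegate to specialist
    return llm_summarize(question, evidence)                  # reason over grounded results

print(answer_biology_question("Does this mutation destabilize the kinase?"))
```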
DE Shaw Research (DESRES) invested heavily in custom silicon for molecular dynamics (MD) to solve protein folding. In contrast, DeepMind's AlphaFold, using ML on experimental data, solved it on commodity hardware. This demonstrates that data-driven approaches can be vastly more effective than brute-force simulation for complex scientific problems.
Simply scaling models on internet data won't solve specialized problems like curing cancer or discovering new materials. While scaling laws hold for in-domain tasks, a model must be optimized against the specific data distribution it needs to learn from; for science, that means generating new experimental data.
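For reference, the in-domain scaling laws alluded to here are typically empirical power laws (the form popularized by Kaplan et al.), for example in parameter count N and dataset size D:

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}$$

The fitted constants are specific to the training distribution, which is exactly why they offer no guarantee on out-of-distribution scientific problems.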
A formal proof doesn't make a system "perfect"; it only answers the specific properties you asked it to prove. Think of it as a perfect query engine: a system can be proven correct against 5,000 properties while a critical flaw hides in the 5,001st property you never thought to ask about.
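A toy illustration of the "query engine" framing (hypothetical code, not from the source): a routine that satisfies the one property we stated while violating a property we never asked about.

```python
from collections import Counter

def broken_sort(xs):
    """Satisfies the property we stated (ordered output) but violates
    one we never stated: it silently drops duplicates."""
    return sorted(set(xs))

# Property 1 (the one we "proved"): output is non-decreasing.
def is_ordered(ys):
    return all(a <= b for a, b in zip(ys, ys[1:]))

# Property 2 (the 5,001st, never asked): output is a permutation of input.
def is_permutation(xs, ys):
    return Counter(xs) == Counter(ys)

xs = [3, 1, 2, 1]
print(is_ordered(broken_sort(xs)))          # True  -- verified, as asked
print(is_permutation(xs, broken_sort(xs)))  # False -- critical flaw, unasked
```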
While a world model can generate a physically plausible arch, it doesn't understand the underlying physics of force distribution. This gap between pattern matching and causal reasoning is a fundamental split between AI and human intelligence, making current models unsuitable for mission-critical applications like architecture.
To ensure scientific validity and mitigate the risk of AI hallucinations, a hybrid approach is most effective. By combining AI's pattern-matching capabilities with traditional physics-based simulation methods, researchers can create a feedback loop where one system validates the other, increasing confidence in the final results.
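A minimal sketch of such a feedback loop, assuming a learned proposer and a physics-based checker; `ml_propose`, `physics_check`, and the agreement tolerance are hypothetical stand-ins, not a real pipeline.

```python
import random

def ml_propose(rng: random.Random) -> dict:
    """Stub for an ML model proposing a candidate (e.g., a structure
    with a predicted energy)."""
    return {"energy_pred": rng.gauss(-100.0, 5.0)}

def physics_check(candidate: dict, rng: random.Random) -> float:
    """Stub for an expensive physics-based evaluation of the candidate."""
    return candidate["energy_pred"] + rng.gauss(0.0, 3.0)

def validated_candidates(n: int, tol: float = 2.0, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        cand = ml_propose(rng)
        sim_energy = physics_check(cand, rng)
        # Keep only proposals the simulator agrees with: the cross-check
        # that flags hallucinated predictions.
        if abs(sim_energy - cand["energy_pred"]) <= tol:
            accepted.append(cand)
    return accepted

print(f"{len(validated_candidates(100))}/100 proposals survived validation")
```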
Demis Hassabis identifies a key obstacle on the path to AGI: unlike math or games, where answers can be verified, the messy real world lacks clear success metrics. This makes it hard for AI systems to run self-improvement loops, limiting their ability to learn and adapt outside highly structured domains.