Counterintuitively, Nobel laureate John Jumper's path to AI began not with abundant resources but with scarcity: during his PhD, he turned to sophisticated algorithms to compensate for a lack of computational power for protein simulations.

Related Insights

While more data and compute yield linear improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like Transformers. These creative ideas are the hardest to produce on demand, making them the highest-leverage, yet riskiest, area for investment and research focus.

Caltech professor Frances Arnold developed her Nobel-winning "directed evolution" method out of desperation. Realizing her biochemistry knowledge was limited compared to peers using "rational design," she embraced a high-volume, random approach that let the experiment, not her intellect, find the solution.

A classical, bottom-up simulation of a cell is infeasible, according to John Jumper. He sees the more practical path forward as fusing specialized models like AlphaFold with the broad reasoning of LLMs to create hybrid systems that understand biology.

A "software-only singularity," where AI recursively improves itself, is unlikely. Progress is fundamentally tied to large-scale, costly experiments run on physical hardware. The fact that labs spend far more on experimental compute than on researcher salaries indicates that physical experimentation, not just algorithmic insight, remains the primary driver of breakthroughs.

The history of AI, such as the 2012 AlexNet breakthrough, demonstrates that scaling compute and data on simpler, older algorithms often yields greater advances than designing intricate new ones. This "bitter lesson" suggests prioritizing scalability over algorithmic complexity for future progress.

John Jumper contends that science has always operated with partial understanding, citing early crystallography and Roman engineering. He suggests that demanding perfect "black box" clarity from AI is a peculiar and unrealistic standard not applied to other scientific tools.

AlphaFold's success in identifying a key protein for human fertilization (out of 2,000 possibilities) showcases AI's power. It acts as a hypothesis generator, dramatically reducing the search space for expensive and time-consuming real-world experiments.

Despite AI's power, 90% of drugs fail in clinical trials. John Jumper argues the bottleneck isn't finding molecules that target proteins, but our fundamental lack of understanding of disease causality, like with Alzheimer's, which is a biology problem, not a technology one.

When LLMs became too computationally expensive for universities, AI research pivoted. Academics flocked to areas like 3D vision, where breakthroughs like NeRF allowed for state-of-the-art results on a single GPU. This resource constraint created a vibrant, accessible, and innovative research ecosystem away from giant models.

John Jumper uses an analogy to explain the leap in complexity from prediction to design. Predicting a protein's structure is like recognizing a bicycle's parts. Designing a new, functional protein is like building a working bicycle—requiring every detail to be correct.

AlphaFold Creator John Jumper Pursued AI Due to a Lack of Computing Power | RiffOn