Myome and Natera are building foundational models for oncology that function like genomic language models. By training on vast cancer sequence and clinical data, these models learn the context of a patient's disease to predict the next mutation, similar to how transformers like GPT predict the next word in a sentence.
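The next-mutation framing can be sketched with a toy model. Everything below is illustrative: the mutation names, patient histories, and the bigram counter (a stand-in for a transformer's learned next-token distribution) are not from Myome or Natera. The interface is the point: condition on a patient's mutation context, then rank candidate "next tokens."

```python
from collections import Counter, defaultdict

# Hypothetical training corpus: each "sentence" is one patient's ordered
# mutation history, with each mutation acting as a token in the genomic
# "language". A real foundation model would train on vast clinical corpora.
histories = [
    ["KRAS_G12D", "TP53_R175H", "SMAD4_loss"],
    ["KRAS_G12D", "TP53_R175H", "CDKN2A_loss"],
    ["EGFR_L858R", "TP53_R273H", "MET_amp"],
]

# Bigram counts stand in for the model's next-token distribution.
bigrams = defaultdict(Counter)
for h in histories:
    for prev, nxt in zip(h, h[1:]):
        bigrams[prev][nxt] += 1

def predict_next(mutation):
    """Rank candidate next mutations by conditional probability,
    given the most recent mutation in a patient's history."""
    counts = bigrams[mutation]
    total = sum(counts.values())
    return [(m, c / total) for m, c in counts.most_common()]

print(predict_next("TP53_R175H"))
```

Swapping the bigram table for a transformer changes the quality of the distribution, not the shape of the prediction problem.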
Beyond early discovery, LLMs deliver significant value in clinical trials. They accelerate timelines by automating months of post-trial documentation work. More strategically, they can improve trial success rates by analyzing genomic data to identify patient populations with a higher likelihood of responding to a treatment.
The next major AI breakthrough will come from applying generative models to complex systems beyond human language, such as biology. By treating biological processes as a unique "language," AI could discover novel therapeutics or research paths, leading to a "Move 37" moment in science.
Genomics (DNA/RNA) only provides the "sheet music" for cancer. Functional Precision Medicine acts as the orchestra, testing how live tumor cells respond to drugs in real time. AI serves as the conductor, optimizing the "performance" for superior outcomes.
Earli combines wet lab experiments with AI in a continuous feedback loop. The company tests massive libraries of synthetic DNA promoter sequences and feeds the performance results into a Large Language Model (LLM), which then designs new, potentially more effective sequences. This iterative process rapidly optimizes Earli's cancer-specific genetic switches.
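A loop of that shape can be sketched in a few lines. The `assay` and `mutate` functions below are hypothetical stand-ins for wet-lab measurements and an LLM-based sequence designer (here a simple evolutionary step); the point is the design-test-learn cycle, not the scoring rule.

```python
import random

random.seed(0)
BASES = "ACGT"

def assay(seq):
    """Stand-in for a wet-lab readout of promoter strength.
    Here: fraction of GC bases; real scores come from experiments."""
    return sum(b in "GC" for b in seq) / len(seq)

def mutate(seq, rate=0.1):
    """Stand-in for the model proposing new designs from top performers."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

# Start from a random library of 20-base synthetic promoter sequences.
library = ["".join(random.choice(BASES) for _ in range(20)) for _ in range(50)]

for _ in range(10):
    # 1. "Wet lab": measure each sequence's performance.
    scored = sorted(library, key=assay, reverse=True)
    # 2. "Designer": keep the best and propose new candidates from them.
    parents = scored[:10]
    library = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(library, key=assay)
print("best sequence:", best, "score:", assay(best))
```

Because the top performers are carried into each new round unchanged, the best score never regresses; each cycle can only confirm or improve on the last, which is what makes the closed loop converge quickly.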
Instead of creating therapies for hundreds of specific driver mutations, which vary widely between patients, Earli's platform targets downstream commonalities—the "hallmarks of cancer" like rapid cell proliferation. These pathways are where diverse mutations converge, creating a more universal and reliable target across different cancers.
The progress of AI in predicting cancer treatment is stalled not by algorithms, but by the data used to train them. Relying solely on static genetic data is insufficient. The critical missing piece is functional, contextual data showing how patient cells actually respond to drugs.
A major misconception is that general-purpose Large Language Models (LLMs) can be readily applied to complex biological problems. Biological data, like RNA sequencing, constitutes a unique language that requires custom-built foundation models, not simply fine-tuning of existing LLMs.
Achieving explainability in AI for drug development isn't about post-hoc analysis. It requires building models from the ground up using inherently interpretable data like RNA sequencing and mutational profiles. When the inputs are explainable, the model's outputs become explainable by design.
Generate Biomedicines' AI learns the fundamental rules of protein structure and function, much like a language's grammar. This allows it to design entirely new proteins by generating novel "sentences" (sequences) that are biologically coherent and functional, rather than just mimicking existing ones found in nature.
A major frustration in genetics is finding "variants of unknown significance" (VUS)—genetic anomalies with no known effect. AI models promise to simulate the impact of these unique variants on cellular function, moving medicine from reactive diagnostics to truly personalized, predictive health.