
While Noetik's models are trained on complex, multimodal data like spatial transcriptomics, they are designed to run inference using only standard, ubiquitous H&E pathology slides. This creates a highly scalable and practical path to a clinical diagnostic without requiring expensive, novel assays for every patient.
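This train-rich, infer-cheap pattern can be illustrated with a toy sketch (not Noetik's actual architecture): a model is fit on paired image features and expression readouts, but at inference time only the cheap image-derived features are required. All names and dimensions here are hypothetical, and a simple ridge regression stands in for the real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 200 training tissue patches, each with a 64-dim H&E
# image-feature vector paired with a 10-gene expression readout
# (the expensive modality, available only at training time).
n_patches, n_feats, n_genes = 200, 64, 10
he_features = rng.normal(size=(n_patches, n_feats))
true_weights = rng.normal(size=(n_feats, n_genes))
expression = he_features @ true_weights + 0.1 * rng.normal(size=(n_patches, n_genes))

# Training uses BOTH modalities: fit ridge regression from H&E features
# to expression via the closed form W = (X^T X + lam*I)^-1 X^T Y.
lam = 1.0
W = np.linalg.solve(
    he_features.T @ he_features + lam * np.eye(n_feats),
    he_features.T @ expression,
)

# Inference needs ONLY the ubiquitous modality: H&E features of a new slide.
new_patch = rng.normal(size=(1, n_feats))
predicted_expression = new_patch @ W  # shape (1, n_genes)
```

The key property is structural: the expensive assay appears only as a training target, so deployment requires nothing beyond a standard stained slide.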

Related Insights

An AI algorithm, trained on thousands of samples, can analyze a simple photo of an unstained tumor slide and predict its ER-positive or ER-negative status with high confidence. This technology could revolutionize diagnostics and guide endocrine therapy in resource-limited settings where standard IHC testing is unavailable.

Noetik's core thesis is that the 95% failure rate in cancer trials isn't due to bad drug design, but an inability to identify the correct patient sub-population. Their models aim to solve this patient selection problem from the outset, rescuing potentially effective drugs.

To achieve an affordable price for its advanced cancer test, Delphi prioritizes algorithmic complexity over "wet lab" complexity. This strategy keeps physical sample processing simple and low-cost, shifting the innovation into scalable AI/ML software that analyzes the data, which is key for mass adoption.

To mitigate data variation caused by running experiments on different days (batch effects), Noetik employs a deliberate arraying strategy. They take dozens of samples from a single tumor and distribute them across multiple randomized arrays, ensuring each patient is represented in several batches for robust calibration and model training.
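The randomization idea above can be sketched in a few lines. This is a hypothetical toy, not Noetik's actual lab protocol: each patient's tumor cores are shuffled and dealt round-robin across arrays so that no patient ends up confined to one batch.

```python
import random
from collections import defaultdict

def assign_cores_to_arrays(patient_cores, n_arrays, seed=0):
    """Spread each patient's tumor cores across arrays so no patient
    is confined to a single batch (toy sketch of array randomization)."""
    rng = random.Random(seed)
    arrays = defaultdict(list)
    for patient, cores in patient_cores.items():
        shuffled = cores[:]
        rng.shuffle(shuffled)
        # Round-robin over a freshly shuffled array order, so each
        # patient's cores land in as many different arrays as possible.
        order = list(range(n_arrays))
        rng.shuffle(order)
        for i, core in enumerate(shuffled):
            arrays[order[i % n_arrays]].append((patient, core))
    return dict(arrays)

# Example: 3 patients with 4 cores each, distributed over 4 arrays,
# so every patient appears in every batch.
cores = {f"patient_{p}": [f"core_{c}" for c in range(4)] for p in range(3)}
layout = assign_cores_to_arrays(cores, n_arrays=4)
```

Because each patient spans every batch, day-to-day technical variation can later be separated from true biological differences between patients.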

Instead of pursuing a purely academic goal of simulating every biochemical process, Noetik's "virtual cell" models are practical tools. They focus on understanding cell biology through heuristics that are useful for making drugs, like predicting a cell's transcriptome or protein expression in a specific context.

AI platforms can analyze existing medical images, like CT scans ordered for a cough, to find subtle, early signs of cancers. This repurposes vast amounts of routine diagnostic data into a powerful, passive screening tool, allowing for incidental discoveries of diseases like pancreatic cancer without new procedures.

Drawing an analogy from neuroscience, Noetik argues for a top-down modeling approach. Instead of building a perfect simulation of a single cell and scaling up, they model the functional interactions at the tissue level first. This abstraction is more likely to predict patient-level outcomes, which is the ultimate goal.

A major cause of clinical trial failure is that preclinical testing uses immortalized cancer cell lines cultured for decades. These cells have abnormal genomes and gene expression profiles that don't represent actual tumors, creating a massive translational gap that Noetik's patient-derived data aims to close.

To bridge the gap between animal models and human trials, Noetik trains models on its human data and then runs inference on mouse histology (H&E) images. This allows them to predict human-relevant biology and gene expression directly from the mouse model, overcoming a key translational hurdle in drug development.

Demonstrating extreme conviction, Noetik invested a year and a half in lab setup, tumor sourcing, and data processing before having a dataset large enough to train its first models. This highlights the immense upfront investment and risk required for a data-first approach in bio-AI, where no off-the-shelf data exists.

Noetik Uses Standard H&E Pathology Images for Powerful, Low-Cost Clinical Inference | RiffOn