
When running multiple independent but parallel experiments, include well-characterized compounds in every group. These "anchor compounds" serve as internal calibration references, creating a baseline that allows for robust and reliable comparison of results across the otherwise separate experimental sets.
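The idea above can be sketched in code. This is a minimal illustration, not a published protocol: all compound names and signal values are hypothetical, and it assumes a simple ratio normalization against the anchor's signal within each group.

```python
# Anchor-compound normalization sketch (names and values are hypothetical):
# every group includes the same well-characterized anchor compound, and each
# group's readings are scaled by that group's anchor signal so results from
# otherwise independent groups become directly comparable.

ANCHOR = "anchor_cmpd"

def normalize_group(readings: dict) -> dict:
    """Scale every reading in a group by that group's anchor signal."""
    anchor_signal = readings[ANCHOR]
    return {cmpd: value / anchor_signal for cmpd, value in readings.items()}

# Two groups run on different days; raw values differ by 2x instrument drift.
group_a = {ANCHOR: 100.0, "cmpd_x": 50.0}
group_b = {ANCHOR: 200.0, "cmpd_y": 100.0}

norm_a = normalize_group(group_a)
norm_b = normalize_group(group_b)
# After normalization the anchor reads 1.0 in both groups, and cmpd_x
# and cmpd_y can be compared on a common scale.
```

More elaborate schemes (e.g., fitting a per-group scale and offset from several anchors) follow the same logic: the anchor's known behavior defines the calibration baseline.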

Related Insights

By using foundation models to analyze vast datasets, companies can create a synthetic 'standard of care' arm for single-arm Phase 1 trials. The AI matches patients based on deep clinical and genomic parameters, providing insights comparable to a much larger Phase 3 trial.
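At its core, building a synthetic control arm involves matching each trial patient to the most similar historical patient. The toy sketch below assumes patients are already embedded as numeric feature vectors (in practice, deep clinical and genomic features) and uses plain nearest-neighbor distance; all identifiers and values are hypothetical.

```python
# Nearest-neighbor matching sketch for a synthetic control arm.
# Feature vectors stand in for deep clinical/genomic embeddings.

def match_control(patient: dict, candidates: list) -> dict:
    """Pick the historical patient closest to the trial patient in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda c: sq_dist(patient["features"], c["features"]))

trial_patient = {"id": "P1", "features": [0.2, 0.8]}
historical = [
    {"id": "H1", "features": [0.9, 0.1]},
    {"id": "H2", "features": [0.25, 0.75]},
]
control = match_control(trial_patient, historical)  # H2 is the closer match
```

A real system would match on many more dimensions, enforce eligibility criteria, and weight features by clinical relevance, but the matching step is conceptually this simple.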

To mitigate data variations caused by running experiments on different days (batch effects), Noetik employs a sophisticated arraying strategy. They take dozens of samples from a single tumor and distribute them across multiple, randomized arrays, ensuring each patient is represented in different batches for robust calibration and model training.
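The arraying strategy can be illustrated with a small assignment routine. This is a generic sketch of randomized block assignment, not Noetik's actual pipeline; sample names and array counts are hypothetical.

```python
import random

def assign_to_arrays(sample_ids: list, n_arrays: int, seed: int = 0) -> dict:
    """Randomly distribute one patient's samples across multiple arrays
    (batches), so the patient is represented in every batch."""
    rng = random.Random(seed)
    shuffled = sample_ids[:]
    rng.shuffle(shuffled)          # randomize before assignment
    arrays = {i: [] for i in range(n_arrays)}
    for idx, sample in enumerate(shuffled):
        arrays[idx % n_arrays].append(sample)   # round-robin over arrays
    return arrays

# 12 cores from a single tumor spread evenly over 4 arrays (3 per array),
# so day-to-day batch effects can be estimated and corrected per patient.
samples = [f"tumor1_core{i}" for i in range(12)]
layout = assign_to_arrays(samples, n_arrays=4)
```

Because every patient appears in several batches, a model can separate the batch signal from the biological signal during calibration.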

To avoid overfitting and prove true generalization, Bolts validates its protein design models by testing them across a wide array of targets from over 25 external academic and industry labs. This diverse, real-world testing is the ultimate benchmark of a model's utility in drug discovery.

Traditional ELISA techniques for biologics are slow and expensive, requiring separate validations for each molecule. Modern mass spectrometry can analyze a mixture of biologics (e.g., six antibodies) in a single, more accurate run, potentially cutting the analytical portion of development costs by 50%.

The standard practice is to optimize for productivity (titer) first, then correct for quality (glycosylation) later. This is reactive and inefficient. Successful teams integrate glycan analysis into their very first screening experiments, making informed, real-time trade-offs between productivity and quality attributes.

Instead of one massive experiment, split numerous factors into smaller, biologically themed groups. Running these focused experiments in parallel is superior to both one-factor-at-a-time and large DOE approaches, as it maintains the breadth of a large screen while providing the high-quality signal of a small one.
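To make the split concrete, here is a minimal sketch: factors are partitioned by theme, and each group gets its own small full-factorial design. The factor names and levels are hypothetical; the point is the run-count arithmetic.

```python
from itertools import product

# Hypothetical factors, split by biological theme rather than pooled together.
themed_groups = {
    "feed":    {"glucose_gL": [2, 4], "glutamine_mM": [1, 3]},
    "process": {"pH": [6.8, 7.0], "temp_C": [34, 37]},
}

def group_runs(factors: dict) -> list:
    """Full-factorial run list for one small, themed factor group."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = {theme: group_runs(f) for theme, f in themed_groups.items()}
# Two parallel 2x2 experiments (4 runs each) instead of one 2^4 design
# (16 runs), while still screening all four factors.
```

Each small design stays interpretable and avoids the dilution and masking problems of a single pooled screen, while the parallel groups together preserve breadth.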

To optimize a complex biosimilar profile with many correlated attributes like glycoforms, use Mahalanobis distance. It calculates a single multivariate distance to the target profile, correctly accounting for inter-glycoform correlations, providing an objective, data-driven method for ranking experimental outcomes.
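The Mahalanobis calculation itself is short. The sketch below uses a hypothetical three-glycoform profile and a covariance matrix that would, in practice, be estimated from replicate runs of the reference product; the formula is the standard one, sqrt((x - t)^T * Cov^-1 * (x - t)).

```python
import numpy as np

def mahalanobis_to_target(profile, target, cov) -> float:
    """Multivariate distance from a measured glycan profile to the target,
    accounting for correlations between glycoforms via the covariance matrix."""
    diff = np.asarray(profile, dtype=float) - np.asarray(target, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Hypothetical reference profile (glycoform percentages) and covariance
# estimated from replicate reference-product runs.
target = [40.0, 30.0, 10.0]
cov = np.array([[4.0, 1.5, 0.5],
                [1.5, 3.0, 0.3],
                [0.5, 0.3, 1.0]])

candidates = {
    "run_A": [41.0, 31.0, 10.2],
    "run_B": [44.0, 27.0, 11.0],
}
ranked = sorted(candidates,
                key=lambda k: mahalanobis_to_target(candidates[k], target, cov))
# ranked[0] is the experimental run closest to the target in multivariate terms.
```

Unlike ranking each glycoform independently, this collapses the whole correlated profile into one objective number per run, which is what makes it usable as a screening criterion.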

A single, massive Design of Experiments (DOE) for screening many compounds is flawed. Adding numerous stock solutions causes dilution, untested combinations can be toxic to cells, and the strong effect of one compound can mask the subtler, yet crucial, effects of others, leading to poor data quality.

Two critical mistakes derail glycoengineering efforts. First, delaying analytical feedback on glycan profiles turns optimization into blind guesswork. Second, failing to test interactions with other process parameters like pH and temperature early on creates a process that is not robust and is prone to failure at scale.

Since true AI explainability is still elusive, a practical strategy for managing risk is benchmarking. By running a new AI model alongside the current one and comparing their outputs on a defined set of tests, companies can identify and address issues like bias or unexpected behavior before a full rollout.
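A benchmarking harness for this can be very simple: run both models on the same defined test set and surface every disagreement for human review. The models below are toy stand-ins (all names hypothetical); real systems would also compare aggregate metrics, not just individual outputs.

```python
def benchmark(current_model, candidate_model, test_cases: list) -> list:
    """Run both models on identical inputs and flag disagreements for review."""
    disagreements = []
    for case in test_cases:
        old_out = current_model(case)
        new_out = candidate_model(case)
        if old_out != new_out:
            disagreements.append((case, old_out, new_out))
    return disagreements

# Toy stand-ins for the current and candidate models: binary classifiers
# that differ only in their decision threshold.
current = lambda x: x >= 0.5
candidate = lambda x: x >= 0.4

diffs = benchmark(current, candidate, [0.1, 0.45, 0.9])
# The boundary case (0.45) is flagged, so it can be investigated for bias
# or unexpected behavior before the candidate model is rolled out.
```

The defined test set acts as a behavioral contract: even without explainability, any drift between models becomes visible as a concrete, reviewable list of cases.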

Include "Anchor Compounds" in Parallel Experiments for Cross-Group Calibration | RiffOn