
Jenny Yang cites physicist Richard Feynman's first principle: "you must not fool yourself—and you are the easiest person to fool." She applies this to biotech by stressing the need for extreme scientific rigor. Innovators must actively challenge their own results and guard against confirmation bias, especially when developing technologies that affect human health.

Related Insights

True scientific progress comes from being proven wrong. When an experiment falsifies a prediction, it definitively rules out one potential model of reality, thereby advancing knowledge. This mindset encourages researchers to treat incorrect hypotheses as learning opportunities rather than failures, bringing them closer to understanding the world.

The most valuable lessons in clinical trial design come from understanding what went wrong. By analyzing the protocols of failed studies, researchers can identify hidden biases, flawed methodologies, and uncontrolled variables, learning precisely what to avoid in their own work.

To combat confirmation bias, withhold the final results of an experiment or analysis until the entire team agrees the methodology is sound—a practice known as blind analysis. This prevents people from subconsciously accepting expected outcomes while over-scrutinizing unexpected ones, leading to more objective conclusions.

Gurus often cite legitimate scientific failures to undermine all scientific authority. However, these crises are often caused by deviations from core scientific principles (e.g., lack of replication). The solution isn't to embrace less rigorous systems but to double down on rigor through practices like open science and replication.

The strength of scientific progress comes from 'individual humility'—the constant process of questioning assumptions and actively searching for errors. This embrace of being wrong, or doubting one's own work, is not a weakness but a superpower that leads to breakthroughs.

Reflecting on his PhD, Terry Rosen emphasizes that experiments that fail are often the most telling. Instead of discarding negative results, scientists should analyze them deeply. Understanding *why* something didn't work provides critical insights that are essential for iteration and eventual success.

A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.
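"Calibrated uncertainty" has a concrete statistical reading: among predictions a system makes with confidence p, roughly a fraction p should turn out correct. A common way to quantify the gap is expected calibration error (ECE). The sketch below is illustrative only—the metric, function name, and all numbers are assumptions, not details from the podcast:

```python
def expected_calibration_error(confidences, correct, n_bins=5):
    """Average |accuracy - confidence| across confidence bins, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, hit in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # which confidence bin this prediction falls in
        bins[idx].append((conf, hit))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)   # what the system claimed
        accuracy = sum(h for _, h in b) / len(b)   # what actually happened
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece

# The "overconfident intern": near-certain claims, mediocre hit rate (made-up data).
overconfident = expected_calibration_error(
    confidences=[0.99, 0.98, 0.97, 0.99, 0.95],
    correct=[1, 0, 1, 0, 1],
)
# A calibrated system: stated confidence matches its actual hit rate.
calibrated = expected_calibration_error(
    confidences=[0.6, 0.6, 0.6, 0.6, 0.6],
    correct=[1, 1, 1, 0, 0],
)
print(overconfident > calibrated)  # the overconfident system has the larger error
```

A healthcare-grade system would pair a metric like this with source attribution and accountability processes, as the insight describes; low ECE alone does not make a model safe.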

True scientific advancement happens when researchers refuse to accept 'no' as an answer. When immunotherapy was dismissed for lung cancer, pioneers investigated why it worked in melanoma but not other cancers. This mindset—questioning failures and studying successes—is key to turning scientific impossibilities into standard treatments.

Dr. Jordan Schlain frames AI in healthcare as fundamentally different from typical tech development. The guiding principle must shift from Silicon Valley's "move fast and break things" to "move fast and not harm people." This is because healthcare is a "land of small errors and big consequences," requiring robust failure plans and accountability.

The scientific process is vulnerable to human fallibility, as scientists are prone to bias and resistance to counterintuitive ideas. Physicist Robert Millikan spent 12 years trying to disprove Einstein's theory of the photoelectric effect, unintentionally gathering the very data that proved it right.