Michael Shermer argues that phenomena like the replication crisis don't prove science is broken. Instead, the fact that these errors are discovered and publicized by other scientists and lab insiders (like graduate students) demonstrates that science's self-correcting mechanisms are functioning properly.

Related Insights

A hidden cause of the reproducibility crisis is how researchers select models like cell lines or mice. The choice is often driven by convenience—what a neighboring lab has available—rather than a systematic evaluation of which model is best suited to answer the specific scientific question.

True scientific progress comes from being proven wrong. When an experiment falsifies a prediction, it definitively rules out a potential model of reality, thereby advancing knowledge. This mindset encourages researchers to treat incorrect hypotheses as learning opportunities rather than failures, bringing them closer to understanding the world.

The most valuable lessons in clinical trial design come from understanding what went wrong. By analyzing the protocols of failed studies, researchers can identify hidden biases, flawed methodologies, and uncontrolled variables, learning precisely what to avoid in their own work.

Following the Galileo affair, the Inquisition felt a duty to verify scientific claims in the books it censored. It established a laboratory to replicate experiments and test whether their claims held up. This practice of a second, independent body recreating results is the foundation of modern scientific peer review, ironically created by a body often seen as anti-science.

Gurus often cite legitimate scientific failures to undermine all scientific authority. However, these crises are often caused by a deviation from core scientific principles (e.g., lack of replication). The solution isn't to embrace less rigorous systems but to double down on scientific methods like open science.

The strength of scientific progress comes from 'individual humility'—the constant process of questioning assumptions and actively searching for errors. This embrace of being wrong, or doubting one's own work, is not a weakness but a superpower that leads to breakthroughs.

Reflecting on his PhD, Terry Rosen emphasizes that experiments that fail are often the most telling. Instead of discarding negative results, scientists should analyze them deeply. Understanding *why* something didn't work provides critical insights that are essential for iteration and eventual success.

The public appetite for surprising, "Freakonomics-style" insights creates a powerful incentive for researchers to generate headline-grabbing findings. This pressure can lead to data manipulation and shoddy science, contributing to the replication crisis in social sciences as researchers chase fame and book deals.

Jenny Yang cites physicist Richard Feynman's warning that "you are the easiest person to fool." She applies this to biotech by stressing the need for extreme scientific rigor: innovators must actively challenge their own results and guard against confirmation bias, especially when developing technologies that impact human health.

Physicist Brian Cox's most-cited paper explored what physics would look like without the Higgs boson. The subsequent discovery of the Higgs proved the paper's premise wrong, yet it remains highly cited for the novel detection techniques it developed. This illustrates that the value of scientific work often lies in its methodology and exploratory rigor, not just its ultimate conclusion.