We scan new podcasts and send you the top 5 insights daily.
Percival Lowell's intelligence didn't prevent his flawed theory; it made him better at defending it. Instead of accepting contrary evidence, he used his intellect to construct elaborate rationalizations, demonstrating that intelligence can be a tool for self-deception, not just a path to truth.
Intelligence is often used as a tool to generate more sophisticated arguments for what one already believes. A higher IQ correlates with a greater ability to find reasons supporting one's existing stance, not with an enhanced ability to genuinely consider opposing viewpoints.
The "moral dumbfounding" phenomenon reveals we often have an instant, gut-level decision and *then* invent reasons to justify it. We believe we're reasoning our way to a conclusion, but we're often just rationalizing an intuition we already hold.
We confuse our capacity for innovation with wisdom, but we are not wise by default. The same mind that conceives of evolution can rationalize slavery, the Holocaust, and cruelty to animals. Our psychology is masterful at justification, making our default state far from conscious or wise.
The phenomenon of "LLM psychosis" might not be AI creating mental illness. Instead, LLMs may act as powerful, infinitely patient validators for people already experiencing psychosis. Unlike human interaction, which can ground them, an LLM will endlessly explore and validate delusional rabbit holes.
The strength of scientific progress comes from "individual humility"—the constant practice of questioning assumptions and actively searching for errors. This willingness to be wrong and to doubt one's own work is not a weakness but a superpower that leads to breakthroughs.
The U.S. military discovered that leaders whose IQ is more than one standard deviation above their team's average are often ineffective. These leaders lose "theory of mind," making it difficult for them to model their team's thinking, which impairs communication and connection.
Applying the machine learning concept of a "learning rate" to human cognition suggests that when a core assumption is disproven by a single counterexample, you should radically increase your learning rate and question all related beliefs, rather than making a small, incremental update.
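The analogy can be made concrete with a minimal sketch. The function and belief names below are illustrative, not from the source: beliefs are credences in [0, 1], and a falsified core assumption triggers a much larger learning rate across everything related to it.

```python
def update_belief(prior: float, evidence: float, learning_rate: float) -> float:
    """Move a credence toward the evidence, scaled by the learning rate."""
    return prior + learning_rate * (evidence - prior)

# Normal operation: a small learning rate makes incremental updates.
belief = update_belief(0.9, evidence=0.0, learning_rate=0.05)  # ~0.855

# A single counterexample falsifies a core assumption: radically raise
# the learning rate and re-examine every belief that depended on it.
core_assumption_falsified = True
lr = 0.8 if core_assumption_falsified else 0.05

related_beliefs = {"assumption_a": 0.9, "assumption_b": 0.7}
related_beliefs = {
    name: update_belief(credence, evidence=0.0, learning_rate=lr)
    for name, credence in related_beliefs.items()
}
```

The design point is that the update rule itself never changes; only the step size does. Staying at the small rate after a decisive counterexample is the cognitive equivalent of underfitting new data.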
The gap between AI believers and skeptics isn't about who "gets it." It's driven by a psychological need for AI to be a normal, non-threatening technology. People grasp onto any argument that supports this view for their own peace of mind, career stability, or business model, making misinformation demand-driven.
To counteract the brain's tendency to preserve existing conclusions, Charles Darwin deliberately considered evidence that contradicted his hypotheses. He was most rigorous when he felt most confident in an idea—a powerful, counterintuitive method for maintaining objectivity and avoiding confirmation bias.
The brain's tendency to create stories simplifies complex information but creates a powerful confirmation bias. As illustrated by a military example where a friendly tribe was nearly bombed, leaders who get trapped in their narrative will only see evidence that confirms it, ignoring critical data to the contrary.