We scan new podcasts and send you the top 5 insights daily.
The author of the viral "AI doom" piece clarifies it wasn't a forecast but an exploration of a bear case. He argues the most uncomfortable position for an investor is an inability to envision the downside. Articulating a potential negative scenario, even with low probability, is a crucial tool for risk management and mental preparedness.
Contrary to popular cynicism, ominous warnings about AI from leaders like Anthropic's CEO are often genuine. Ethan Mollick suggests these executives truly believe in the potential dangers of the technology they are creating, and it's not solely a marketing tactic to inflate its power.
Unlike a plague or asteroid, the existential threat of AI is 'entertaining' and 'interesting to think about.' This, combined with its immense potential upside, makes it psychologically difficult to sustain a level of concern proportional to the risk probabilities its own creators cite.
Negative AI scenarios are more persuasive than utopian ones because of inherent cognitive biases. The "seen vs. unseen" bias makes it easier to visualize existing job losses than to imagine new job creation. The "fixed-pie fallacy" incorrectly frames economic growth and productivity gains as zero-sum.
The notable aspect of the Citrini Research piece isn't its dystopian predictions, but its widespread acceptance among investors. Unlike previous 'AI doomer sci-fi,' it's acting as confirmation bias for a market already grappling with AI's disruptive potential. The report's success signals a major shift in 'common knowledge' about AI's socioeconomic risks.
OpenAI's Boaz Barak advises individuals to treat AI risk like the nuclear threat of the past. While society should worry about tail risks, individuals should focus on the high-probability space where their actions matter, rather than being paralyzed by a small probability of doom.
Our brains are wired to find evidence that supports our existing beliefs. To counteract this dangerous bias in investing, actively search for dissenting opinions and information that challenge your thesis. A crucial question to ask is, 'What would need to happen for me to be wrong about this investment?'
Before committing capital, professional investors rigorously challenge their own assumptions. They actively ask, "If I'm wrong, why?" This process of stress-testing an idea helps avoid costly mistakes and strengthens the final thesis.
A core discipline from risk arbitrage is to precisely understand and quantify the potential downside before investing. By knowing exactly 'why we're going to lose money' and what that loss looks like, investors can better set probabilities and make more disciplined, unemotional decisions.
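The discipline above can be sketched as simple expected-value arithmetic. This is an illustrative example, not anything from the podcast; the deal prices, fallback price, and 85% close probability are all hypothetical numbers chosen to show how a quantified downside feeds into a probability-weighted decision.

```python
# Illustrative sketch (all figures hypothetical): quantify the downside of a
# position before entry, in the spirit of the risk-arb discipline above.

def expected_value(p_win: float, upside: float, downside: float) -> float:
    """Probability-weighted outcome per share.

    `downside` is a negative number: the loss if the thesis breaks.
    """
    return p_win * upside + (1 - p_win) * downside

# Hypothetical merger-arb setup: stock trades at $48, deal price is $50,
# and the estimated fallback price if the deal breaks is $40.
upside = 50 - 48      # +$2 per share if the deal closes
downside = 40 - 48    # -$8 per share if it breaks: "what that loss looks like"

ev = expected_value(p_win=0.85, upside=upside, downside=downside)
print(round(ev, 2))   # expected value per share at an 85% close probability
```

Writing the loss down as a number (here, -$8) forces the unemotional question the insight describes: at what probability of success does this position stop being worth the risk?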
Economists are weighing two contradictory negative scenarios for AI: one in which its rapid success causes massive job upheaval, and another in which it fails to meet investor hype, triggering a stock market collapse and recession much like the dot-com bust.
To fight overconfidence before a big decision, conduct a "premortem." Imagine the investment has already failed spectacularly and work backward to list all the plausible reasons for its failure. This exercise engages your analytical "System 2" thinking, revealing risks your optimistic side would otherwise ignore.