Sam Harris highlights the bizarre cultural phenomenon of AI leaders openly stating high probabilities (e.g., 20%) of existential risk while racing to build the technology. He contrasts this with the Manhattan Project scientists, who proceeded only after calculating that the risk of igniting the atmosphere was infinitesimal, not a double-digit percentage.
Unlike a plague or an asteroid, the existential threat of AI is 'entertaining' and 'interesting to think about.' This, combined with its immense potential upside, makes it psychologically difficult to maintain the level of concern that the risk estimates cited by its own creators would rationally warrant.
The core disagreement between AI safety advocate Max Tegmark and former White House advisor Dean Ball stems from their vastly different probabilities of AI-induced doom. Tegmark’s >90% justifies preemptive regulation, while Ball’s 0.01% favors a reactive, innovation-friendly approach. Their policy stances are downstream of this fundamental disagreement over risk.
Many top AI CEOs openly acknowledge that their work carries extinction-level risks, with some estimating the odds as high as 25%. Yet they feel powerless to stop the race: if a CEO paused for safety, investors would simply replace them with someone willing to push forward, creating a systemic trap in which everyone sees the danger but no one can afford to hit the brakes.
On this view, the rhetoric around AI's existential risks doubles as a competitive tactic. Some labs used these narratives to scare off investors, regulators, and potential competitors, effectively 'pulling up the ladder' to cement their market lead under the guise of safety.
Leaders in AI and robotics appear to accept the risks of creating potentially uncontrollable, human-like AI, exemplified by their embrace of a 'Westworld' future. This 'why not?' attitude suggests a culture where the pursuit of technological possibility may overshadow cautious ethical deliberation and risk assessment.
Top AI leaders are motivated by a competitive, ego-driven desire to create a god-like intelligence, believing it grants them ultimate power and a form of transcendence. This 'winner-takes-all' mindset leads them to rationalize immense risks to humanity, framing it as an inevitable, thrilling endeavor.
AI companies' minimization of existential risk mirrors historical precedents such as the tobacco and leaded-gasoline industries, where immense, long-term public harm was knowingly inflicted for comparatively small corporate gains, enabled by powerful self-deception and rationalization.
OpenAI's Boaz Barak advises individuals to treat AI risk like the nuclear threat of the past. While society should worry about tail risks, individuals should focus on the high-probability space where their actions matter, rather than being paralyzed by a small probability of doom.
Bengio admits he unconsciously dismissed catastrophic AI risks for years. The turning point wasn't intellectual but emotional: realizing his work could endanger his own family's future after seeing ChatGPT's capabilities and thinking of his grandson.
Other scientific fields operate under a "precautionary principle," avoiding experiments with even a small chance of catastrophic outcomes (e.g., creating dangerous new lifeforms). The AI industry, however, proceeds with what Bengio calls "crazy risks," ignoring this fundamental safety doctrine.