A pessimistic stance on new technology, even one that starts out moderate, often escalates into advocacy for draconian policies. The effective halt of civilian nuclear power construction in the United States during the 1970s is a prime example of a fear-based decision with catastrophic long-term consequences, including strengthening geopolitical rivals.

Related Insights

Instead of defaulting to skepticism and looking for reasons why something won't work, the most productive starting point is to imagine how big and impactful a new idea could become. After exploring the optimistic case, you can then systematically address and mitigate the risks.

Society rarely bans powerful new technologies, no matter how dangerous they are. Instead, as with fire, we develop systems to manage the risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation rather than prohibition.

Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.

A ban on superintelligence is self-defeating because enforcement would require a sanctioned global government body to build the very technology it prohibits in order to "prove it's safe." This paradoxically creates a state-controlled monopoly on the most powerful technology ever conceived, posing a greater risk than a competitive landscape.

The political left requires a central catastrophe narrative to justify its agenda of economic regulation and information control. As the "climate doomerism" narrative loses potency, "AI doomerism" is emerging as its successor—a new, powerful rationale for centralizing power over the tech industry.

A regulator who approves a new technology that fails faces immense public backlash and career ruin. Conversely, they receive little glory for a success. This asymmetric risk profile creates a powerful incentive to deny or delay new innovations, preserving the status quo regardless of potential benefits.

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

The same fear-based arguments and political forces that halted nuclear fission are now re-emerging to block fusion. Ironically, the promise of a future fusion 'savior' is being used as another excuse to prevent the deployment of existing, proven zero-emission fission technology today.

Perception of nuclear power is sharply divided by age. Those who remember the Three Mile Island accident are fearful, while younger generations, facing the climate crisis, see it as a clean solution. As this younger cohort gains power, a return to nuclear energy becomes increasingly likely.

The history of nuclear power, where regulation flattened an exponential growth curve into an S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that forecloses a "fast takeoff" and slows adoption.