Society rarely bans powerful new technologies, no matter how dangerous. Instead, as with fire, we develop systems to manage risk (e.g., fire departments, alarms). This offers a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.

Related Insights

The common analogy of AI to electricity is dangerously rosy. AI is more like fire: a transformative tool that, if mismanaged or weaponized, can spread uncontrollably with devastating consequences. This mental model better prepares us for AI's inherent risks and accelerating power.

It's futile to debate *whether* transformative technologies like AI and robotics should be developed. If a technology offers a decisive advantage, it *will* be built, regardless of the risks. The only rational approach is to accept its inevitability and focus energy on managing its deployment rather than resisting it.

Initial public fear over new technologies like AI therapy, while seemingly negative, is actually productive. It creates the social and political pressure needed to establish essential safety guardrails and regulations, ultimately leading to safer long-term adoption.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

The belief that AI development is unstoppable ignores history. Global treaties successfully limited nuclear proliferation, phased out ozone-depleting CFCs, and banned blinding lasers. These precedents prove that coordinated international action can steer powerful technologies away from the worst outcomes.

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

AI will create negative consequences, just as the internet spawned the dark web. However, its potential to solve major problems like disease and energy scarcity makes its development a net positive for society, justifying the risks that must be managed along the way.

Other scientific fields operate under a "precautionary principle," avoiding experiments with even a small chance of catastrophic outcomes (e.g., creating dangerous new lifeforms). The AI industry, however, proceeds with what Bengio calls "crazy risks," ignoring this fundamental safety doctrine.

The history of nuclear power, where regulation transformed an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that forecloses a "fast takeoff," effectively regulating it out of rapid adoption.

The approach to AI safety isn't new; it mirrors historical solutions for managing technological risk. Just as Benjamin Franklin's 18th-century fire insurance company created building codes and inspections to reduce fires, a modern AI insurance market can drive the creation and adoption of safety standards and audits for AI agents.

Dangerous Technologies Inspire Management Systems, Not Outright Bans