The 1977 Environmental Modification Convention (ENMOD) successfully prohibited the weaponization of environmental forces before the technology was mature. This serves as a key historical precedent, demonstrating that global consensus can foreclose dangerous technological paths well in advance of their creation.
Tech billionaire Bill Gates supports a radical concept called solar radiation management: injecting reflective aerosols into the atmosphere to scatter sunlight and cool the planet. This moves the idea of a "sun visor for Earth" from science fiction to a seriously considered, albeit controversial, last-resort response to climate tipping points.
For a blueprint on AI governance, look to Cold War-era geopolitics, not just tech history. The 1967 UN Outer Space Treaty, negotiated between the US and the Soviet Union at the height of the space race, barred weapons of mass destruction from orbit and showed that global compromise on new frontiers is possible even amidst intense rivalry. It provides a model for political, not just technical, solutions.
The path to surviving superintelligence is political: a global pact to halt its development, mirroring Cold War nuclear strategy. Success hinges on all leaders understanding that anyone building it ensures their own personal destruction, removing any incentive to cheat.
The very researchers who saw creating mirror life as a grand scientific challenge are now its staunchest opponents after analyzing the risks. This powerful example of scientific self-regulation saw pioneers of the field pivot from creation to prevention, forming an interdisciplinary group to warn the world.
Society rarely bans powerful new technologies, no matter how dangerous. Instead, like with fire, we develop systems to manage risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.
Researchers can avoid the immense risk of creating mirror life for study. Instead, they can develop mirror-image countermeasures (such as mirror antibodies) and test them against normal bacteria. Because chemistry behaves identically under reflection, a mirror countermeasure that neutralizes normal bacteria implies its normal-chirality counterpart would neutralize mirror bacteria, enabling safe R&D without ever building the pathogen itself.
The belief that AI development is unstoppable ignores history. Global treaties successfully limited nuclear proliferation, phased out ozone-depleting CFCs, and banned blinding lasers. These precedents prove that coordinated international action can steer powerful technologies away from the worst outcomes.
Unlike AI or nuclear power, mirror life offers minimal foreseeable benefits but poses catastrophic risks. This lack of a strong commercial or economic driver makes it politically easier to build a global consensus for a moratorium or ban, as there are few powerful interests advocating for its development.
Other scientific fields operate under a "precautionary principle," avoiding experiments with even a small chance of catastrophic outcomes (e.g., creating dangerous new lifeforms). The AI industry, however, proceeds with what Bengio calls "crazy risks," ignoring this fundamental safety doctrine.
International AI treaties are feasible. Just as nuclear arms control monitors uranium and plutonium, AI governance can monitor the choke point for advanced AI: the high-end compute chips produced by companies like NVIDIA. Tracking the global distribution of these chips could verify compliance with agreed development limits.