For some policy experts, the most realistic nightmare scenario is not a rogue superintelligence but a socio-economic collapse into techno-feudalism. In this future, AI concentrates power and wealth, creating a rentier state with a small ruling class and a large population with minimal economic agency or purpose.
Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific catastrophic outcomes, such as overthrowing the government. This places the burden of proof on AI developers, requiring them to demonstrate that their systems cannot cause these predefined harms, and it sidesteps definitional debates entirely.
A key, informal safety layer against AI doom is the institutional self-preservation of the developers themselves. It's argued that labs like OpenAI or Google would not knowingly release a model they believed posed a genuine threat of overthrowing the government, opting instead to halt deployment and alert authorities.
The argument that the U.S. must race to build superintelligence before China is flawed. The Chinese Communist Party's primary goal is control. An uncontrollable AI poses a direct existential threat to their power, making them more likely to heavily regulate or halt its development rather than recklessly pursue it.
An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.
Broad, high-level statements calling for an AI ban are not intended as draft legislation but as tools to build public consensus. This strategy mirrors past social movements, where achieving widespread moral agreement on a vague principle (e.g., against child pornography) was a necessary precursor to creating detailed, expert-crafted laws.
The 'FDA for AI' analogy is flawed because the FDA's rigid, one-drug-one-disease model is ill-suited for a general-purpose technology. This structure struggles with modern personalized medicine, and a similar top-down regime for AI could embed faulty assumptions, stifling innovation and adaptability for a rapidly evolving field.
Fears of AI's 'recursive self-improvement' should be contextualized. Every major general-purpose technology, from iron to computers, has been used to improve itself. While AI's speed may differ, this self-catalyzing loop is a standard characteristic of transformative technologies and has not previously resulted in runaway existential threats.
The core disagreement between AI safety advocate Max Tegmark and former White House advisor Dean Ball stems from their vastly different estimates of the probability of AI-induced doom. Tegmark’s estimate of over 90% justifies preemptive regulation, while Ball’s 0.01% favors a reactive, innovation-friendly approach. Their policy stances are downstream of this fundamental divergence in risk assessment.
