
Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Dec 10, 2025

Max Tegmark (P-Doom >90%) and Dean Ball (P-Doom 0.01%) clash over banning superintelligence, debating FDA-style preemptive regulation versus a reactive approach to AI governance.

A More Plausible AI Dystopia is Techno-Feudalism, Not Human Extinction

For some policy experts, the most realistic nightmare scenario is not a rogue superintelligence but a socio-economic collapse into techno-feudalism. In this future, AI concentrates power and wealth, creating a rentier state with a small ruling class and a large population with minimal economic agency or purpose.


Regulate AI's Harmful Outcomes, Not the Nebulous Concept of 'Superintelligence'

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.


Major AI Labs Would Likely Self-Censor Models That Risk Overthrowing the Government

A key, informal safety layer against AI doom is the institutional self-preservation of the developers themselves. It's argued that labs like OpenAI or Google would not knowingly release a model they believed posed a genuine threat of overthrowing the government, opting instead to halt deployment and alert authorities.


China's Authoritarian Self-Preservation Instinct Makes It an Unlikely Superintelligence Competitor

The argument that the U.S. must race to build superintelligence before China is flawed. The Chinese Communist Party's primary goal is maintaining control, and an uncontrollable AI poses a direct existential threat to that power. Beijing is therefore more likely to heavily regulate or halt superintelligence development than to recklessly pursue it.


An 'FDA for AI' Would Shift the Safety Burden to Developers, Spurring a Research Boom

An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.


Vague AI Safety Statements Intentionally Build Political Will Before Legislation is Drafted

Broad, high-level statements calling for an AI ban are not intended as draft legislation but as tools to build public consensus. This strategy mirrors past social movements, where achieving widespread moral agreement on a vague principle (e.g., against child pornography) was a necessary precursor to creating detailed, expert-crafted laws.


The FDA's Industrial-Era Model is a Poor Fit for Regulating General-Purpose AI

The 'FDA for AI' analogy is flawed because the FDA's rigid, one-drug-one-disease model is ill-suited for a general-purpose technology. This structure struggles with modern personalized medicine, and a similar top-down regime for AI could embed faulty assumptions, stifling innovation and adaptability for a rapidly evolving field.


'Recursive Self-Improvement' is a Historical Feature of All General-Purpose Technologies

Fears of AI's 'recursive self-improvement' should be contextualized. Every major general-purpose technology, from iron to computers, has been used to improve itself. While AI's speed may differ, this self-catalyzing loop is a standard characteristic of transformative technologies and has not previously resulted in runaway existential threats.


AI Policy Disagreements Hinge on Drastically Different 'P-Doom' Estimates (90% vs 0.01%)

The core disagreement between AI safety advocate Max Tegmark and former White House advisor Dean Ball stems from their vastly different probabilities of AI-induced doom. Tegmark’s >90% justifies preemptive regulation, while Ball’s 0.01% favors a reactive, innovation-friendly approach. Their policy stances are downstream of this fundamental risk assessment.
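
As a rough illustration of why the stances diverge (this expected-value framing is ours, not the episode's): with catastrophe probability p, catastrophe cost D, and regulation cost R, preemptive regulation pays off whenever

p × D > R

At Tegmark's p > 0.9, nearly any affordable R clears the bar; at Ball's p = 0.0001, the inequality holds only if a catastrophe would cost more than 10,000 times the regulation. The same decision rule, fed different values of p, yields opposite policy prescriptions.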
