Contrary to common AI risk narratives, conquests of less technologically advanced societies by more advanced ones (e.g., the Spanish in Mexico) rarely resulted in total genocide. Conquerors often integrated the existing elite into the new system for practical governance, suggesting AIs might find it more rational to incorporate humans than to eliminate them.
The discourse often presents a binary: either AI plateaus below human level, or it undergoes a runaway singularity. A plausible but overlooked alternative is a "superhuman plateau," in which AI is vastly superior to humans yet still constrained by physical limits, transforming society without becoming omnipotent.
Fears of a superintelligent AI takeover rest on "thinkism": the flawed belief that intelligence trumps all else. Translating intelligence into real-world effect requires other traits, such as perseverance and empathy; intelligence is necessary but not sufficient. And the will to survive will always overwhelm the will to predate.
Fears of AI's "recursive self-improvement" deserve historical context. Every major general-purpose technology, from iron to computers, has been used to improve itself. While AI's speed may differ, this self-catalyzing loop is a standard feature of transformative technologies and has not previously produced a runaway existential threat.
For some policy experts, the most realistic nightmare scenario is not a rogue superintelligence but a socio-economic collapse into techno-feudalism. In this future, AI concentrates power and wealth, creating a rentier state with a small ruling class and a large population with minimal economic agency or purpose.
The assumption that superintelligence will inevitably rule is flawed. In human society, raw IQ is not the primary determinant of power, as evidenced by PhDs often working for MBAs. This suggests an AGI wouldn't automatically dominate humanity simply by being smarter.
Society rarely bans powerful new technologies, no matter how dangerous. Instead, as with fire, we develop systems to manage risk (e.g., fire departments, alarms). This provides a historical lens for current debates around transformative technologies like AI, suggesting adaptation over prohibition.
The idea that AI development is a winner-take-all race to AGI is a compelling story that flattens a more complex reality. The narrative is strategically useful: it creates a pretext for aggressive, "do whatever it takes" behavior while sidestepping the messier nature of real-world conflict.
A superintelligent AI doesn't need to be malicious to destroy humanity. Our extinction could be a mere side effect of its resource consumption (e.g., overheating the planet), a logical step to acquire our atoms, or a preemptive measure to neutralize us as a potential threat.
AI safety scenarios often miss the socio-political dimension. A superintelligence's greatest threat isn't direct action but its ability to recruit a massive human following to defend it and enact its will. This makes simple containment measures like "unplugging it" socially and physically impossible, as humans would protect their new "leader."
Viewing AI as merely a technological progression, or as a problem of assimilating a new tool into human society, is a mistake. It is a "co-evolution": the technology's logic shapes human systems, while human priorities, rivalries, and malevolence in turn shape how the technology is developed and deployed, creating unforeseen risks and opportunities.