The discourse often presents a binary: AI plateaus below human level or undergoes a runaway singularity. A plausible but overlooked alternative is a "superhuman plateau," where AI is vastly superior to humans but still constrained by physical limits, transforming society without becoming omnipotent.

Related Insights

While discourse often focuses on exponential growth, the AI Safety Report presents 'progress stalls' as a serious scenario, analogous to passenger aircraft speeds, which plateaued after the 1960s. This highlights that continued rapid advancement is not guaranteed, given potential technical or resource bottlenecks.

Framing AGI as reaching human-level intelligence is a limiting concept. Unconstrained by biology, AI will rapidly surpass the best human experts in every field. The focus should be on harnessing this superhuman capability, not just achieving parity.

Instead of a single "AGI" event, AI progress is better understood in three stages. We're in the "powerful tools" era. The next is "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off due to capability "spikiness."

AI's capabilities are highly uneven. Models are already superhuman in specific domains like speaking 150 languages or possessing encyclopedic knowledge. However, they still fail at tasks typical humans find easy, such as continual learning or nuanced visual reasoning like understanding perspective in a photo.

Broad improvements in AI's general reasoning are plateauing due to data saturation. The next major phase is vertical specialization. We will see an "explosion" of different models becoming superhuman in highly specific domains like chemistry or physics, rather than one model getting slightly better at everything.

The ultimate outcome of AI might not be a singular superintelligence ("Digital God") but an infinite supply of competent, 120-IQ digital workers ("Digital Guys"). While less dramatic than AGI, an unlimited, reliable workforce would still be profoundly transformative for the global economy.

The discourse around AGI is caught in a paradox. Either it is already emerging, in which case it's less a cataclysmic event and more an incremental software improvement, or it remains a perpetually receding future goal. This captures the tension between the hype of superhuman intelligence and the reality of software development.

Defining AGI as 'human-equivalent' is too limiting because human intelligence is capped by biology (e.g., an IQ of ~160). The truly transformative moment is when AI systems surpass these biological limits, providing access to problem-solving capabilities that are fundamentally greater than any human's.

A more likely AI future involves an ecosystem of specialized agents, each mastering a specific domain (e.g., physical vs. digital worlds), rather than a single, monolithic AGI that understands everything. These agents will require protocols to interact.

DeepMind's Shane Legg argues that human intelligence is not the upper limit because the brain is constrained by biology (a ~20-watt power budget, slow electrochemical signals). Data centers enjoy orders-of-magnitude advantages in power, bandwidth, and signal speed, making superhuman AI a physical certainty in his view.