OpenAI's CEO believes the term "AGI" is ill-defined and that its milestone may already have passed without fanfare. He proposes focusing on "superintelligence" instead, defining it as an AI that can outperform the best humans at complex roles like CEO or president, which creates a clearer, more consequential threshold.

Related Insights

OpenAI co-founder Ilya Sutskever suggests the path to AGI is not building a pre-trained, all-knowing model but an AI that can learn any task as effectively as a human. This reframes the challenge from knowledge transfer to discovering a universal learning algorithm, which in turn changes how such systems would be deployed.

A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across 10 domains. This new framework moves beyond abstract debate, showing a concrete 30-point leap in AGI score from GPT-4 (27%) to a projected GPT-5 (57%).
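To make the arithmetic behind such a score concrete, here is a minimal, hypothetical sketch that assumes the overall score is an equal-weighted average of per-domain proficiency percentages; the domain names and example values are placeholders, not figures from the published framework.

```python
# Illustrative sketch only: assumes the overall AGI score is an
# equal-weighted average of per-domain proficiency percentages.
# Domain names and example scores are placeholders, not values
# from the published framework.

DOMAINS = [
    "general_knowledge", "reading_writing", "math", "on_the_spot_reasoning",
    "working_memory", "long_term_memory_storage", "long_term_memory_retrieval",
    "visual_processing", "auditory_processing", "speed",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Average proficiency (0-100) across all ten domains.

    A domain missing from `domain_scores` counts as 0, so a narrow system
    that excels in a few areas still scores low overall.
    """
    return sum(domain_scores.get(d, 0.0) for d in DOMAINS) / len(DOMAINS)

# Hypothetical example: a model strong on knowledge and language but
# untested on memory and perception ends up with a modest overall score.
example = {"general_knowledge": 90, "reading_writing": 85, "math": 70}
print(f"Overall AGI score: {agi_score(example):.1f}%")
```

Under this equal-weighting assumption, a system with a few strong domains but large gaps elsewhere still lands well below 100%, which is what lets a score like this distinguish narrow skill spikes from broad, adult-level competence.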

The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a 'super-intelligent 15-year-old' that learns and adapts to specific roles.

Instead of a single "AGI" event, AI progress is better understood in three stages. We're in the "powerful tools" era. The next is "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off because capabilities are "spiky": strong on some tasks while still failing at others.

The popular concept of AGI as a static, all-knowing entity is flawed. A more realistic and powerful model is one analogous to a 'super-intelligent 15-year-old': a system with a foundational capacity for rapid, continual learning. Deployment would mean this AI learning on the job rather than arriving with complete knowledge.

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

The definition of AGI is a moving goalpost. Scott Wu argues that today's AI already meets the standards that would have counted as AGI a decade ago. As technology automates tasks, human work simply moves to a higher level of abstraction, which is why definitions of AGI tied to automating a fixed percentage of today's work are flawed.

OpenAI's new GDPval benchmark evaluates models on complex, real-world knowledge-work tasks rather than abstract IQ-style tests. This pivot signifies that the true measure of AI progress is now its ability to perform economically valuable human jobs, making model performance directly comparable to the output of experienced professionals.
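As an illustration of what grading models against professional output can look like, here is a minimal sketch, not OpenAI's actual GDPval harness: each task pairs a model deliverable with one produced by an experienced professional, a grader picks a winner or declares a tie, and the headline metric is the model's win-or-tie rate. All names and results below are hypothetical.

```python
# Minimal sketch of pairwise grading against professional deliverables.
# Occupations, verdicts, and results are hypothetical illustrations.
from dataclasses import dataclass
from typing import Literal

Verdict = Literal["model", "professional", "tie"]

@dataclass
class TaskResult:
    occupation: str   # e.g. "lawyer", "nurse" (placeholder labels)
    verdict: Verdict  # grader's judgment on the pair of deliverables

def win_or_tie_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks where the model matched or beat the professional."""
    wins = sum(r.verdict in ("model", "tie") for r in results)
    return wins / len(results) if results else 0.0

# Hypothetical grading outcomes for three tasks.
results = [
    TaskResult("lawyer", "professional"),
    TaskResult("financial_analyst", "model"),
    TaskResult("nurse", "tie"),
]
print(f"Win-or-tie rate: {win_or_tie_rate(results):.0%}")
```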

The race to manage AGI is hampered by a philosophical problem: there is no consensus definition of what it is. We might dismiss a true AGI's outputs as "hallucinations" because they don't fit our current framework, making it impossible to know when the threshold from advanced AI to genuine general intelligence has actually been crossed.

Ilya Sutskever argues that the AI industry's "age of scaling" (2020-2025) is insufficient for achieving superintelligence. He posits that the next leap requires a return to the "age of research" to discover new paradigms, as simply making existing models 100x larger won't be enough for a breakthrough.