Dan Shipper proposes a practical, economic definition of AGI that sidesteps philosophical debates: we will have AGI when AI agents are so capable at continuous learning, memory management, and proactive work that restarting them for each task costs more, cognitively and economically, than simply leaving them running.
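
A toy calculation makes that threshold concrete. The sketch below uses entirely hypothetical numbers to compare the daily cost of restarting an agent per task (rebuilding context, re-learning preferences) against the cost of keeping it running; by this definition, AGI is the point where the first figure exceeds the second.

```python
# Hypothetical back-of-envelope model of the threshold above.
# Every number is an illustrative assumption, not a measured value.

TASKS_PER_DAY = 20
RESTART_OVERHEAD_USD = 1.50    # assumed cost to rebuild context per task
ALWAYS_ON_USD_PER_DAY = 12.00  # assumed cost of keeping the agent running

restart_cost = TASKS_PER_DAY * RESTART_OVERHEAD_USD   # $30.00/day
always_on_cost = ALWAYS_ON_USD_PER_DAY                # $12.00/day

print(f"Restart per task: ${restart_cost:.2f}/day")
print(f"Always on:        ${always_on_cost:.2f}/day")

# Under this definition, the AGI threshold is crossed once restarting
# costs more than persistence, so it is rational never to turn it off.
print("keep it running" if restart_cost > always_on_cost else "restart per task")
```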

Related Insights

OpenAI co-founder Ilya Sutskever suggests the path to AGI is not creating a pre-trained, all-knowing model, but an AI that can learn any task as effectively as a human. This reframes the challenge from knowledge transfer to creating a universal learning algorithm, with direct consequences for how such systems would be deployed.

As AI models achieve previously defined benchmarks for intelligence (e.g., reasoning), their failure to generate transformative economic value reveals those benchmarks were insufficient. This justifies 'shifting the goalposts' for AGI: it is a rational response to realizing our understanding of intelligence was too narrow, because progress in impressiveness doesn't equate to progress in usefulness.

Today's AI models have surpassed the definition of Artificial General Intelligence (AGI) that was commonly accepted by AI researchers just over a decade ago. The debate continues because the goalposts for what constitutes "true" AGI have been moved.

A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across 10 domains. This new framework moves beyond abstract debate, showing a concrete 30-point leap in AGI score from GPT-4 (27%) to a projected GPT-5 (57%).
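
If the framework's headline number is read as an equal-weight average over its 10 domains (an assumption here, as is every per-domain score below), the arithmetic behind a score like GPT-4's 27% is simple:

```python
# Equal-weight average over 10 domains -- an assumption about how the
# framework aggregates. Domain names and per-domain scores are made up;
# only the 27% headline figure comes from the reported GPT-4 result.

def agi_score(domain_scores: dict[str, float]) -> float:
    """Mean proficiency across cognitive domains, equally weighted."""
    return sum(domain_scores.values()) / len(domain_scores)

gpt4_like = {f"domain_{i}": s for i, s in enumerate(
    [0.6, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1, 0.0])}

print(f"AGI score: {agi_score(gpt4_like):.0%}")  # -> AGI score: 27%
```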

The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a 'super-intelligent 15-year-old' that learns and adapts to specific roles.

A practical definition of AGI is an AI that operates autonomously and persistently without continuous human intervention. Like a child gaining independence, it would manage its own goals and learn over long periods, a capability far beyond today's models, which require constant prompting to function.

Moving away from abstract definitions, Sequoia Capital's Pat Grady and Sonya Huang propose a functional definition of AGI: the ability to figure things out. This involves combining baseline knowledge (pre-training) with reasoning and the capacity to iterate over long horizons to solve a problem without a predefined script, as seen in emerging coding agents.
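
That definition maps naturally onto the loop coding agents run. The sketch below is a minimal, hypothetical version: propose_fix() and run_tests() are stand-ins for a model call and a test harness, not any real API. The shape is what matters, pre-trained knowledge plus reasoning, iterated over a long horizon until the problem yields.

```python
# Minimal sketch of a "figure things out" loop: baseline knowledge plus
# reasoning, iterated over a long horizon with no predefined script.
# propose_fix() and run_tests() are hypothetical stand-ins.

def propose_fix(problem: str, history: list[str]) -> str:
    # Stand-in for the model: in reality, pre-trained knowledge and
    # reasoning over the failure history would produce the next attempt.
    return f"attempt-{len(history)}"

def run_tests(attempt: str) -> tuple[bool, str]:
    # Stand-in for the environment: here, the fifth attempt "passes".
    if attempt == "attempt-4":
        return True, "all tests pass"
    return False, f"{attempt} failed"

def solve(problem: str, max_steps: int = 50) -> str | None:
    history: list[str] = []            # context carried across iterations
    for _ in range(max_steps):         # long-horizon iteration budget
        attempt = propose_fix(problem, history)
        ok, feedback = run_tests(attempt)
        if ok:
            return attempt             # the agent figured it out
        history.append(feedback)       # learn from the failure and retry
    return None                        # budget exhausted

print(solve("fix the failing build"))  # -> attempt-4, after five iterations
```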

Cutting through abstract definitions, Quora CEO Adam D'Angelo offers a practical benchmark for AGI: an AI that can perform any job a typical human can do remotely. This anchors the concept to tangible economic impact, providing a more useful milestone than philosophical debates on consciousness.

A useful mental model for AGI is child development. Just as a child can be left unsupervised for progressively longer periods, AI agents are seeing their autonomous runtimes increase. AGI arrives when it becomes economically profitable to let an AI work continuously without supervision, much like an independent adult.
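
The analogy can be stated as a break-even, sketched below with purely hypothetical costs: as an agent's error rate per autonomous hour falls, the interval at which a human check-in pays for itself stretches out, which is the "leave the child alone longer" curve in economic terms.

```python
# Hypothetical break-even for unsupervised runtime. All numbers are
# illustrative assumptions: lower error rates push out the point at
# which a human check-in is worth its cost.

SUPERVISOR_USD_PER_CHECK = 15.0   # assumed cost of one human review
ERROR_COST_USD = 200.0            # assumed cost of one uncaught mistake

def check_in_pays(errors_per_hour: float, hours_between_checks: float) -> bool:
    """True if a review is expected to save more than it costs."""
    expected_error_cost = errors_per_hour * hours_between_checks * ERROR_COST_USD
    return expected_error_cost > SUPERVISOR_USD_PER_CHECK

for rate in (0.5, 0.05, 0.005):   # improving agents: fewer errors per hour
    hours = 1.0
    while not check_in_pays(rate, hours):
        hours *= 2                # double the unsupervised stretch
    print(f"errors/hour={rate}: check in every ~{hours:g} hours")
```

With these made-up numbers, the profitable check-in interval grows from 1 hour to 16 as the error rate drops a hundredfold; push the rate low enough and no check-in ever pays, which is this insight's criterion for AGI.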