A practical definition of AGI is the capacity to function as a 'drop-in remote worker,' fully substituting for a human on long-horizon tasks. Today's AI, despite genius-level abilities in narrow domains, fails this test because it cannot reliably string together multiple tasks over extended periods, highlighting the 'jagged frontier' of its abilities.

Related Insights

A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across ten cognitive domains. The framework moves beyond abstract debate by producing a concrete score, showing a 30-point leap from GPT-4 (27%) to a projected GPT-5 (57%).
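
If the framework simply averages ten equally weighted domain scores into one headline number (an assumption here; the published weighting may differ), the arithmetic behind those figures is straightforward. The per-domain values below are invented placeholders chosen only so the aggregates match:

```python
# Illustrative scoring arithmetic, assuming ten equally weighted
# cognitive domains. The per-domain values are invented placeholders,
# not the framework's published results; only the aggregates match.
gpt4_domains = [0.6, 0.5, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 0.1, 0.0]
gpt5_domains = [0.9, 0.8, 0.8, 0.7, 0.6, 0.5, 0.5, 0.4, 0.3, 0.2]

def agi_score(domains: list[float]) -> float:
    """Overall AGI score as the mean of the ten domain scores."""
    return sum(domains) / len(domains)

print(f"GPT-4: {agi_score(gpt4_domains):.0%}")  # GPT-4: 27%
print(f"GPT-5: {agi_score(gpt5_domains):.0%}")  # GPT-5: 57%
```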

Dan Shipper proposes a practical, economic definition of AGI that sidesteps philosophical debates: we will have AGI when AI agents are so adept at continuous learning, memory management, and proactive work that the cognitive and economic cost of restarting them for each task outweighs the benefit of turning them off.
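
Shipper's threshold reduces to a break-even inequality. A minimal sketch, assuming the relevant costs could be quantified at all; every name and value here is an illustrative placeholder:

```python
# Break-even test behind Shipper's economic AGI definition.
# All costs are hypothetical placeholders, not measured quantities.

def worth_leaving_on(restart_cost: float,
                     lost_context_cost: float,
                     idle_cost: float) -> bool:
    """True when restarting the agent for each new task costs more
    (compute plus re-onboarding) than simply leaving it running."""
    return restart_cost + lost_context_cost > idle_cost

# An agent that is cheap to keep warm but expensive to re-onboard
# crosses the threshold:
print(worth_leaving_on(restart_cost=5.0,
                       lost_context_cost=40.0,
                       idle_cost=10.0))  # True
```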

AI intelligence shouldn't be measured with a single metric like IQ. AIs exhibit "jagged intelligence," being superhuman in specific domains (e.g., mastering 200 languages) while simultaneously lacking basic capabilities like long-term planning, making them fundamentally unlike human minds.

A practical definition of AGI is an AI that operates autonomously and persistently without continuous human intervention. Like a child gaining independence, it would manage its own goals and learn over long periods—a capability far beyond today's models that require constant prompting to function.

Instead of a single, generalizable AI, we are creating 'Functional AGI'—a collection of specialized AIs layered together. This system will feel like AGI to users but lacks true cross-domain reasoning, as progress in one area (like coding) doesn't translate to others (like history).

The popular concept of AGI as a static, all-knowing entity is flawed. A more realistic and powerful model is one analogous to a 'super-intelligent 15-year-old': a system with a foundational capacity for rapid, continual learning. Deployment would involve this AI learning on the job, not arriving with complete knowledge.

The current focus on pre-training AI models on specific tool fluencies overlooks the crucial need for on-the-job, context-specific learning. Humans excel because they don't need to rehearse every task in advance. This gap suggests AGI is further away than some believe, as true intelligence requires self-directed, continuous learning in novel environments.

Moving away from abstract definitions, Sequoia Capital's Pat Grady and Sonya Huang propose a functional definition of AGI: the ability to figure things out. This involves combining baseline knowledge (pre-training) with reasoning and the capacity to iterate over long horizons to solve a problem without a predefined script, as seen in emerging coding agents.
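
A rough sketch of that "figure things out" loop, with every name hypothetical rather than any real agent's API: prior knowledge proposes a step, the environment returns feedback, and the loop iterates until the problem is solved:

```python
from typing import Any, Callable

def figure_it_out(propose: Callable[[list], Any],
                  evaluate: Callable[[Any], tuple[bool, str]],
                  max_steps: int = 50) -> Any:
    """Iterate toward a solution with no predefined script."""
    history: list = []                        # working memory over the long horizon
    for _ in range(max_steps):
        attempt = propose(history)            # reason from baseline knowledge + feedback
        solved, feedback = evaluate(attempt)  # act in the environment, observe the result
        if solved:
            return attempt
        history.append((attempt, feedback))   # accumulate evidence, don't restart
    raise RuntimeError("horizon exhausted before the problem was solved")

# Toy usage: find the integer in [0, 100] whose square is 1764
# by narrowing the range from accumulated feedback.
def propose(history):
    lo = max((a for a, f in history if f == "too low"), default=0)
    hi = min((a for a, f in history if f == "too high"), default=100)
    return (lo + hi) // 2

def evaluate(guess):
    if guess * guess == 1764:
        return True, "solved"
    return False, "too low" if guess * guess < 1764 else "too high"

print(figure_it_out(propose, evaluate))  # 42
```

The toy search stands in for a coding agent's edit-run-read-the-logs cycle; the structure of the loop, not the task, is the point.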

Cutting through abstract definitions, Quora CEO Adam D'Angelo offers a practical benchmark for AGI: an AI that can perform any job a typical human can do remotely. This anchors the concept to tangible economic impact, providing a more useful milestone than philosophical debates on consciousness.

Current AI models exhibit "jagged intelligence," performing at a PhD level on some tasks but failing at simple ones. Google DeepMind CEO Demis Hassabis identifies this inconsistency and lack of reliability as a primary barrier to achieving true, general-purpose AGI.