The definition of AGI is a moving goalpost. Scott Wu argues that today's AI already meets what would have been considered the bar for AGI a decade ago. As technology automates tasks, human work simply moves to a higher level of abstraction, which is why percentage-based definitions of AGI (e.g., "AI that can do X% of human work") are flawed: the pool of human work keeps shifting.

Related Insights

The most immediate AI milestone is not the singularity, but "Economic AGI," the point where AI can perform most virtual knowledge work better than humans. This threshold, predicted to arrive within 12-18 months, will trigger massive societal and economic shifts long before a "Terminator"-style superintelligence becomes a reality.

OpenAI co-founder Ilya Sutskever suggests the path to AGI is not creating a pre-trained, all-knowing model, but building an AI that can learn any task as effectively as a human. This reframes the challenge from knowledge transfer to creating a universal learning algorithm, with implications for how such systems would be deployed.

A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across 10 domains. This new framework moves beyond abstract debate, showing a concrete 30-point leap in AGI score from GPT-4 (27%) to a projected GPT-5 (57%).

The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a "super-intelligent 15-year-old" that learns and adapts to specific roles.

Julian Schrittwieser, a researcher at Anthropic who previously worked at Google DeepMind, forecasts that, extrapolating current AI progress, models will achieve full-day autonomy and match human experts across many industries by mid-2026. This timeline is much shorter than many anticipate.

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

Contrary to the view that useful AI agents are a decade away, Andrew Ng asserts that agentic workflows are already solving complex business problems. He cites examples from his portfolio in tariff compliance and legal document processing that would be impossible without current agentic AI systems.

A useful mental model for AGI is child development. Just as a child can be left unsupervised for progressively longer periods, AI agents are seeing their autonomous runtimes increase. AGI arrives when it becomes economically profitable to let an AI work continuously without supervision, much like an independent adult.

The ultimate goal for leading labs isn't just creating AGI, but automating the process of AI research itself. By replacing human researchers with millions of "AI researchers," they aim to trigger a "fast takeoff" or recursive self-improvement. This makes automating high-level programming a key strategic milestone.

Dr. Li views the distinction between AI and AGI as largely semantic and market-driven, rather than a clear scientific threshold. The original goal of AI research, dating back to Turing, was to create machines that can think and act like humans. The term "AGI" doesn't fundamentally change this North Star for scientists.