Google's Project Genie can generate playable game worlds from text prompts, a feat that would have seemed like AGI only a few years ago. Yet users' expectations immediately shift to the next challenge, with demands for AI-generated game mechanics such as timers, scoring, and complex interactions.

Related Insights

Today's AI models have surpassed the bar for Artificial General Intelligence (AGI) as it was commonly defined by AI researchers just over a decade ago. The debate continues only because the goalposts for what constitutes "true" AGI have moved.

A consortium including leaders from Google and DeepMind has defined AGI as matching the cognitive versatility of a "well-educated adult" across 10 domains. This new framework moves beyond abstract debate, showing a concrete 30-point leap in AGI score from GPT-4 (27%) to a projected GPT-5 (57%).
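To make the framework concrete, here is a minimal sketch of its scoring idea: an overall AGI score as an equally weighted average over 10 cognitive domains. The domain names and per-domain values below are illustrative placeholders, not figures from the framework; only the headline totals (GPT-4 at 27%, GPT-5 at 57%) come from the text above.

```python
# Illustrative scoring sketch: overall AGI score as an equally weighted mean
# over 10 cognitive domains. Domain names and per-domain scores are invented
# placeholders chosen so the averages match the reported headline totals.

DOMAINS = [
    "knowledge", "reading_writing", "math", "reasoning", "working_memory",
    "long_term_memory", "visual", "auditory", "speed", "learning",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Average per-domain proficiency (0-100), equal weight per domain."""
    assert set(domain_scores) == set(DOMAINS)
    return sum(domain_scores.values()) / len(DOMAINS)

# Hypothetical profiles that happen to average to the reported totals.
gpt4 = dict(zip(DOMAINS, [55, 60, 40, 30, 10, 0, 20, 10, 25, 20]))
gpt5 = dict(zip(DOMAINS, [90, 85, 75, 70, 40, 10, 55, 40, 60, 45]))

print(agi_score(gpt4))  # 27.0
print(agi_score(gpt5))  # 57.0 -> the 30-point leap
```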

Static benchmarks are easily gamed. Dynamic environments like the game Diplomacy force models to negotiate, strategize, and even lie, offering a richer, more realistic evaluation than narrow performance benchmarks for reasoning or coding, as the sketch below illustrates.
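The structural difference is easy to show. In the hypothetical harness below, `query_model` stands in for any LLM API call, and a model's score emerges from live round-robin play rather than from fixed question/answer pairs, so there is nothing to memorize in advance.

```python
# Toy sketch of dynamic, interaction-based evaluation. query_model and the
# scoring rule are hypothetical stand-ins, not a real benchmark or API.
import itertools

def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real harness would call the model's API here.
    return f"[{model}'s move]"

def play_negotiation(model_a: str, model_b: str, rounds: int = 5) -> int:
    """Run a toy negotiation; return +1 if A wins, -1 if B wins, 0 for a draw."""
    transcript: list[str] = []
    for turn in range(rounds):
        speaker = model_a if turn % 2 == 0 else model_b
        move = query_model(speaker, "\n".join(transcript) or "Open negotiations.")
        transcript.append(f"{speaker}: {move}")
    # Judging the final agreement is itself a hard problem; elided here.
    return 0

def tournament(models: list[str]) -> dict[str, int]:
    """Round-robin: scores emerge from play, so they cannot be pre-gamed."""
    scores = {m: 0 for m in models}
    for a, b in itertools.permutations(models, 2):
        result = play_negotiation(a, b)
        scores[a] += result
        scores[b] -= result
    return scores

print(tournament(["model_x", "model_y"]))
```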

Demis Hassabis describes an innovative training method combining two AI projects: Genie, which generates interactive worlds, and SIMA, a generalist agent that acts within them. By placing a SIMA agent inside a world created by Genie, they create a dynamic feedback loop that supplies virtually infinite, increasingly complex training scenarios.
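The following is a minimal sketch of that loop's structure, not DeepMind's actual system: `ToyWorld` and `ToyAgent` are invented stand-ins for a Genie-generated environment and a SIMA-style agent, and the point is the curriculum dynamic, where harder worlds are generated as the agent improves.

```python
# Toy world-generator/agent feedback loop. All classes and numbers here are
# illustrative assumptions; only the loop structure mirrors the description.
import random

class ToyWorld:
    """Stand-in for a generated environment of a given difficulty."""
    def __init__(self, difficulty: float):
        self.difficulty = difficulty
    def run_episode(self, skill: float) -> float:
        # Success gets less likely as difficulty outpaces the agent's skill.
        return 1.0 if random.random() < max(0.05, skill - self.difficulty + 0.5) else 0.0

class ToyAgent:
    """Stand-in for an embodied agent; skill improves with successful episodes."""
    def __init__(self):
        self.skill = 0.0
    def learn(self, reward: float):
        self.skill += 0.1 * reward

difficulty = 0.0
agent = ToyAgent()
for generation in range(50):
    world = ToyWorld(difficulty)             # the generator proposes a world
    reward = world.run_episode(agent.skill)  # the agent acts in it
    agent.learn(reward)                      # the agent improves from the outcome
    if reward > 0.5:                         # curriculum: ratchet up difficulty
        difficulty += 0.05                   # only when the agent succeeds
print(f"final skill={agent.skill:.2f}, difficulty={difficulty:.2f}")
```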

Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling': a spatial intelligence that understands geometry, physics, and actions. Building this capacity to represent and act on the state of the world is the next critical phase of AI development, beyond current language models.

Hassabis argues AGI isn't just about solving existing problems. True AGI must demonstrate the capacity for breakthrough creativity, like Einstein developing a new theory of physics or Picasso creating a new art genre. This sets a much higher bar than current systems.

Silicon Valley now measures the intelligence of large language models like ChatGPT by their ability to play Pokémon. The game's complex mazes, puzzles, and strategic decisions provide a more robust and comprehensive benchmark for modern AI capabilities than traditional tests like chess, Jeopardy, or the Turing test.

The pursuit of AGI may mirror the history of the Turing Test. Once ChatGPT clearly passed the test, the milestone was dismissed as unimportant. Similarly, as AI achieves what we now call AGI, society will likely move the goalposts and decide our original definition was never the true measure of intelligence.

The primary constraint on output is no longer a tool's capability but the user's skill in prompting it. This is exemplified by a developer who built a complex real-time strategy (RTS) game from scratch in one week entirely by prompting an AI model; he had not hand-written a single line of code in two months.

Google DeepMind CEO Demis Hassabis argues that today's large models are insufficient for AGI. He believes progress requires reintroducing algorithmic techniques from systems like AlphaGo, specifically planning and search, to enable more robust reasoning and problem-solving capabilities beyond simple pattern matching.
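To make the idea concrete, here is a hedged sketch of wrapping a learned proposal model in explicit search, the way AlphaGo wrapped its policy network in tree search. `propose_moves` and `evaluate` are hypothetical stand-ins for a model's action proposals and a value estimate, applied to a toy puzzle (reach a target number) purely for illustration.

```python
# Best-first search over model-proposed moves: spend compute exploring
# promising branches instead of committing to the model's first suggestion.
# The proposal and value functions below are toy stand-ins, not real models.
import heapq

TARGET = 24

def propose_moves(state: int) -> list[int]:
    """Stand-in for a policy model proposing candidate next states."""
    return [state + 3, state * 2, state - 1]

def evaluate(state: int) -> float:
    """Stand-in for a value model: lower is better (distance to the target)."""
    return abs(TARGET - state)

def best_first_search(start: int, budget: int = 100) -> list[int]:
    frontier = [(evaluate(start), start, [start])]
    seen = {start}
    for _ in range(budget):
        if not frontier:
            break
        score, state, path = heapq.heappop(frontier)
        if state == TARGET:
            return path
        for nxt in propose_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (evaluate(nxt), nxt, path + [nxt]))
    return []

print(best_first_search(1))  # [1, 4, 8, 16, 19, 22, 25, 24]
```

The design point is the division of labor: the model narrows the action space, and the search procedure supplies the deliberate lookahead that pure pattern matching lacks.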