
AI companies exploit the lack of a scientific consensus on 'AGI' (Artificial General Intelligence) by defining it differently to suit their audience—as a cure-all for regulators, a helpful assistant for consumers, or a revenue machine for investors.

Related Insights

Today's AI models have surpassed the definition of Artificial General Intelligence (AGI) that was commonly accepted by AI researchers just over a decade ago. The debate continues because the goalposts for what constitutes "true" AGI have been moved.

OpenAI's CEO believes the term "AGI" is ill-defined and its milestone may have passed without fanfare. He proposes focusing on "superintelligence" instead, defining it as an AI that can outperform the best human at complex roles like CEO or president, creating a clearer, more impactful threshold.

Naming AI research teams with terms like "AGI" is more about signaling a long-term "north star" and creating "vibes" to attract ambitious talent, rather than reflecting a concrete, step-by-step plan to achieve artificial general intelligence.

The definition of AGI is a moving target. Scott Wu argues that today's AI meets the standards that would have been considered AGI a decade ago. As technology automates tasks, human work simply moves to a higher level of abstraction, which makes percentage-of-work-automated definitions of AGI flawed.

Dr. Li views the distinction between AI and AGI as largely semantic and market-driven, rather than a clear scientific threshold. The original goal of AI research, dating back to Turing, was to create machines that can think and act like humans. The term "AGI" doesn't fundamentally change this north star for scientists.

Labs like DeepMind and OpenAI state that building a machine that can do anything a human brain can is their core mission. However, many experts consider the goal far-fetched because no clear path to it exists, which frames the pursuit as an article of faith rather than a concrete scientific roadmap.

Sequoia highlights the "AI effect": once an AI capability becomes mainstream, we stop calling it AI and give it a specific name, thereby moving the goalposts for "true" AI. This historical pattern of downplaying achievements is a key reason they are explicitly declaring the arrival of AGI.

Dan Siroker argues AGI has already been achieved, but we're reluctant to admit it. He claims major AI labs have 'perverse incentives' to keep moving the goalposts, such as avoiding contractual triggers (like OpenAI with Microsoft) or to continue the lucrative AI funding race.

The race to manage AGI is hampered by a philosophical problem: there's no consensus definition for what it is. We might dismiss true AGI's outputs as "hallucinations" because they don't fit our current framework, making it impossible to know when the threshold from advanced AI to true general intelligence has actually been crossed.

The philosophical AGI debate is being replaced by a pragmatic focus on 'Work AGI.' Companies like OpenAI are orienting their entire strategy around automating and accelerating the economy by executing complex chains of knowledge work tasks, not just single, discrete actions.