The pursuit of AGI may mirror the history of the Turing Test. Once ChatGPT clearly passed the test, the milestone was dismissed as unimportant. Similarly, as AI achieves what we now call AGI, society will likely move the goalposts and decide our original definition was never the true measure of intelligence.
As AI models hit previously defined benchmarks for intelligence (e.g., reasoning) without generating transformative economic value, those benchmarks are revealed as insufficient. On this view, "shifting the goalposts" for AGI is justified: it is a rational response to realizing our understanding of intelligence was too narrow. Progress in impressiveness doesn't equate to progress in usefulness.
Sci-fi predicted parades when AI passed the Turing Test; in reality, the milestone arrived with models like GPT-3.5 and the world barely noticed. This reveals humanity's remarkable ability to quickly normalize profound technological leaps and simply move the goalposts for what feels revolutionary.
Today's AI models surpass what counted as Artificial General Intelligence (AGI) under definitions commonly accepted by AI researchers just over a decade ago. The debate persists only because the goalposts for "true" AGI have since been moved.
OpenAI's CEO believes the term "AGI" is ill-defined and that its milestone may already have passed without fanfare. He proposes focusing on "superintelligence" instead, defined as an AI that outperforms the best human in complex roles such as CEO or president, a clearer and more consequential threshold.
The definition of AGI is a moving goalpost. Scott Wu argues that today's AI already meets standards that would have been called AGI a decade ago. Because automating tasks simply pushes human work to a higher level of abstraction, definitions that peg AGI to automating a fixed percentage of human work are flawed: the pool of work being measured keeps shifting.
The discourse around AGI is caught in a paradox: either AGI is already emerging, in which case it looks less like a cataclysmic event and more like incremental software improvement, or it remains a perpetually receding future goal. The paradox captures the tension between the hype of superhuman intelligence and the reality of software development.
Sequoia highlights the "AI effect": once an AI capability becomes mainstream, we stop calling it AI and give it a specific name, thereby moving the goalposts for "true" AI. This historical pattern of downplaying achievements is a key reason they are explicitly declaring the arrival of AGI.
Dan Siroker argues AGI has already been achieved but that we're reluctant to admit it. He claims major AI labs have "perverse incentives" to keep moving the goalposts, such as avoiding contractual triggers (as with OpenAI and Microsoft) or prolonging the lucrative AI funding race.
The pursuit of AGI is misguided. The real value of AI lies in creating reliable, interpretable, and scalable software systems that solve specific problems, much like traditional engineering. The goal should be "Artificial Programmable Intelligence" (API), not AGI.
The race to manage AGI is hampered by a philosophical problem: there is no consensus definition of what it is. We might even dismiss a true AGI's outputs as "hallucinations" because they don't fit our current framework, making it impossible to know when the threshold from advanced AI to genuine general intelligence has actually been crossed.