Sequoia highlights the "AI effect": once an AI capability becomes mainstream, we stop calling it AI and give it a specific name, thereby moving the goalposts for "true" AI. This historical pattern of downplaying achievements is a key reason they are explicitly declaring the arrival of AGI.

Related Insights

As AI models achieve previously defined benchmarks for intelligence (e.g., reasoning), their failure to generate transformative economic value reveals those benchmarks were insufficient. Shifting the goalposts for AGI is therefore justified: it is a rational response to realizing our understanding of intelligence was too narrow. Progress in impressiveness doesn't equate to progress in usefulness.

Today's AI models have surpassed the definition of Artificial General Intelligence (AGI) that was commonly accepted by AI researchers just over a decade ago. The debate continues because the goalposts for what constitutes "true" AGI have been moved.

OpenAI's CEO believes the term "AGI" is ill-defined and its milestone may have passed without fanfare. He proposes focusing on "superintelligence" instead, defining it as an AI that can outperform the best human at complex roles like CEO or president, creating a clearer, more impactful threshold.

The definition of AGI is a moving goalpost. Scott Wu argues that today's AI meets the standards that would have been considered AGI a decade ago. As technology automates tasks, human work simply moves to a higher level of abstraction, so any percentage-based definition of AGI (one that requires automating some fixed fraction of human work) targets a baseline that keeps shifting and is therefore flawed.

The discourse around AGI is caught in a paradox. Either it is already emerging, in which case it's less a cataclysmic event and more an incremental software improvement, or it remains a perpetually receding future goal. This captures the tension between the hype of superhuman intelligence and the reality of software development.

Sequoia's proclamation that AGI has arrived is a strategic move to energize founders. The firm argues that today's AI, particularly long-horizon agents, is already capable enough to solve major problems, urging entrepreneurs to stop waiting for a future breakthrough and start building now.

The term "AI" is a moving target. Technologies like databases or even machine learning were once considered AI but are now just "software." In common usage, AI simply refers to the newest, most novel computational capabilities, and the label will fade as they become commonplace.

Dan Siroker argues AGI has already been achieved, but we're reluctant to admit it. He claims major AI labs have "perverse incentives" to keep moving the goalposts, such as avoiding contractual triggers (as with OpenAI and Microsoft) or prolonging the lucrative AI funding race.

The race to manage AGI is hampered by a philosophical problem: there's no consensus definition for what it is. We might dismiss true AGI's outputs as "hallucinations" because they don't fit our current framework, making it impossible to know when the threshold from advanced AI to true general intelligence has actually been crossed.

In the 2010s, the term "AI" was perceived as hype, so researchers deliberately rebranded the field as "Machine Learning" to gain serious traction. Now the cycle has reversed, and "AI" is once again the preferred term, highlighting the cyclical and strategic nature of technology branding.