Sequoia's proclamation that AGI has arrived is a strategic move to energize founders. The firm argues that today's AI, particularly long-horizon agents, is already capable enough to solve major problems, urging entrepreneurs to stop waiting for a future breakthrough and start building now.

Related Insights

The belief that a future Artificial General Intelligence (AGI) will solve all problems acts as a rationalization for inaction. This "messiah" view is dangerous because the AI revolution is continuous and happening now. Deferring action sacrifices the opportunity to build crucial, immediate capabilities and expertise.

Instead of a single "AGI" event, AI progress is better understood in three stages. We're in the "powerful tools" era. The next is "powerful agents" that act autonomously. The final stage, "autonomous organizations" that outcompete human-led ones, is much further off due to capability "spikiness."

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

According to Sequoia's Pat Grady, the best time to start an AI application company is now. The foundational playbook has been established through three key technological leaps: pre-training (ChatGPT), reasoning (o1), and long-horizon agency (Claude). This clarity provides a stable platform for building valuable applications.

Contrary to the view that useful AI agents are a decade away, Andrew Ng asserts that agentic workflows are already solving complex business problems. He cites examples from his portfolio in tariff compliance and legal document processing that would be impossible without current agentic AI systems.

The definition of AGI is a moving goalpost. Scott Wu argues that today's AI meets the standards that would have been considered AGI a decade ago. As technology automates tasks, human work simply moves to a higher level of abstraction, making percentage-based definitions of AGI flawed: the pool of "human work" being measured keeps shifting.

The persistent narrative that AGI is "right around the corner" is no longer just technological optimism. It has become a financial necessity to justify over a trillion dollars in spent or committed capital, staving off a catastrophic collapse of investment in the AI sector.

Sequoia highlights the "AI effect": once an AI capability becomes mainstream, we stop calling it AI and give it a specific name, thereby moving the goalposts for "true" AI. This historical pattern of downplaying achievements is a key reason they are explicitly declaring the arrival of AGI.

Dan Siroker argues AGI has already been achieved, but we're reluctant to admit it. He claims major AI labs have "perverse incentives" to keep moving the goalposts, such as avoiding contractual triggers (like OpenAI's agreement with Microsoft) or sustaining the lucrative AI funding race.

The focus on achieving Artificial General Intelligence (AGI) is a distraction. Today's AI models are already so capable that they can fundamentally transform business operations and workflows if applied to the right use cases.