Anthropic CEO Dario Amodei's two-year AGI timeline, far shorter than DeepMind's five-year estimate, is rooted in his prediction that AI will automate most software engineering within 12 months. He sees this "code AGI" as the inflection point for a recursive feedback loop in which AI rapidly improves itself.
Andrew Wilkinson argues that advanced AI models have achieved AGI-like capabilities in programming. He quotes Anthropic's CEO, suggesting that the role of a programmer is shifting to that of an architect, and that many current programmers are in denial because their paychecks depend on not understanding this shift.
Researchers from Anthropic, xAI, and Google are publicly stating that Claude's advanced coding abilities feel like a form of AGI, capable of replicating a year's worth of human engineering work in just one hour.
Julian Schrittwieser, a researcher at Anthropic and formerly at Google DeepMind, extrapolates from current AI progress to forecast that models will achieve full-day autonomy and match human experts across many industries by mid-2026. This timeline is much shorter than many anticipate.
Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.
There's a stark contrast in AGI timeline predictions. Newcomers and enthusiasts often predict AGI within months or a few years. However, the field's most influential figures, like Ilya Sutskever and Andrej Karpathy, are now signaling that true AGI is likely a decade or more away, suggesting the current paradigm has limitations.
Companies like OpenAI and Anthropic are not just building better models; their strategic goal is an "automated AI researcher." The ability for an AI to accelerate its own development is viewed as the key to getting so far ahead that no competitor can catch up.
A key strategy for labs like Anthropic is automating AI research itself. By building models that can perform the tasks of AI researchers, they aim to create a feedback loop that dramatically accelerates the pace of innovation.
Anthropic's resource allocation is guided by one premise: that AI progress will be rapid and transformative. This leads them to concentrate bets on the areas with the highest leverage in such a future: software engineering, to accelerate their own development, and AI safety, which becomes paramount as models grow more powerful and autonomous.
The ultimate goal for leading labs isn't just creating AGI, but automating the process of AI research itself. By replacing human researchers with millions of "AI researchers," they aim to trigger a "fast takeoff" or recursive self-improvement. This makes automating high-level programming a key strategic milestone.
Sam Altman's goal of an "automated AI research intern" by 2026 and a full "researcher" by 2028 is not about simple task automation. It is a direct push toward creating recursively self-improving systems—AI that can discover new methods to improve AI models, aiming for an "intelligence explosion."