Julian Schrittwieser, a key researcher at Anthropic and formerly at Google DeepMind, argues that extrapolating current AI progress points to models achieving full-day autonomy and matching human experts across many industries by mid-2026, a timeline far shorter than most anticipate.

Related Insights

The most immediate AI milestone is not the singularity but "Economic AGI": AI that can perform most virtual knowledge work better than humans. This threshold, predicted to arrive within 12-18 months, will trigger massive societal and economic shifts long before a "Terminator"-style superintelligence becomes a reality.

Block's CTO quantifies the impact of the company's internal AI agent, Goose: AI-forward engineering teams save 8-10 hours weekly, a figure he considers the absolute baseline. He notes that "this is the worst it will ever be," suggesting exponential gains are still to come.
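
As rough arithmetic (a sketch, not Block's own accounting; the 40-hour week and the per-engineer reading of the savings are assumptions), 8-10 hours is a fifth to a quarter of a standard week:

```python
# Back-of-the-envelope: what 8-10 saved hours means against a standard week.
# Assumptions (not stated by Block): savings are per engineer, 40-hour work week.
HOURS_PER_WEEK = 40

for hours_saved in (8, 10):
    share = hours_saved / HOURS_PER_WEEK
    print(f"{hours_saved} h saved per week ~ {share:.0%} of a {HOURS_PER_WEEK}-hour week")
```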

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

AI labs like Anthropic find that mid-tier models can be trained with reinforcement learning to outperform their largest, most expensive models in just a few months, accelerating the pace of capability improvements.

OpenAI's new GDPVal framework evaluates AI on real-world knowledge work. It found that frontier models produce work rated equal to or better than that of human experts nearly 50% of the time, while being 100 times faster and cheaper. This provides a direct measure of impending economic transformation.
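
As a hedged illustration of how such a comparison can be scored, the sketch below assumes blinded pairwise judgments between a model deliverable and an expert deliverable; the labels and data are invented and do not reflect OpenAI's actual GDPVal pipeline:

```python
from collections import Counter

# Each task yields a blinded grader judgment: was the model's deliverable
# preferred, the expert's, or were they rated a tie? (Invented labels.)
judgments = ["model", "expert", "tie", "model", "expert", "model", "tie", "expert"]

counts = Counter(judgments)
total = len(judgments)

# "Equal to or better" corresponds to wins plus ties.
win_rate = counts["model"] / total
win_or_tie_rate = (counts["model"] + counts["tie"]) / total

print(f"model preferred outright:    {win_rate:.0%}")
print(f"model rated equal or better: {win_or_tie_rate:.0%}")
```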

OpenAI has announced goals of an AI research intern by 2026 and a fully autonomous AI researcher by 2028. This isn't just a scientific pursuit; it's a core business strategy to exponentially accelerate AI discovery by automating innovation itself, with the resulting autonomous researcher sold as a high-priced agent.

The evolution of Tesla's Full Self-Driving offers a clear parallel for enterprise AI adoption. Initially, human oversight and frequent "disengagements" (interventions) will be necessary. As AI agents learn, the rate of disengagement will drop, signaling a shift from a co-pilot tool to a fully autonomous worker in specific professional domains.
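
One way to operationalize the analogy is to track the agent equivalent of "miles between disengagements": autonomous working hours per human intervention, per release. A minimal sketch with invented numbers:

```python
# Autonomous agent hours per human intervention ("disengagement"), by release.
# The figures are invented purely to illustrate the trend worth tracking.
releases = {
    "v1": {"autonomous_hours": 120, "interventions": 60},
    "v2": {"autonomous_hours": 400, "interventions": 80},
    "v3": {"autonomous_hours": 900, "interventions": 45},
}

for name, stats in releases.items():
    hours_per_intervention = stats["autonomous_hours"] / stats["interventions"]
    print(f"{name}: {hours_per_intervention:.1f} autonomous hours per intervention")
```

A rising number signals the shift from co-pilot to autonomous worker; a flat one means humans are still doing the hard parts.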

A key strategy for labs like Anthropic is automating AI research itself. By building models that can perform the tasks of AI researchers, they aim to create a feedback loop that dramatically accelerates the pace of innovation.

A useful mental model for AGI is child development. Just as a child can be left unsupervised for progressively longer periods, AI agents' autonomous runtimes are steadily lengthening. AGI arrives when it becomes economically profitable to let an AI work continuously without supervision, much like an independent adult.
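
That profitability threshold can be phrased as a simple break-even condition: unsupervised operation wins once the expected cost of uncaught mistakes falls below the cost of the supervision it replaces. A toy sketch; every figure and the single-error-cost model are assumptions, not from the source:

```python
# Toy break-even test for letting an AI agent work unsupervised.
# All dollar figures and rates below are assumptions for illustration.
value_per_hour = 150.0            # value of the work the agent produces ($/h)
supervision_cost_per_hour = 90.0  # cost of the human reviewer it would replace ($/h)
error_rate_per_hour = 0.02       # chance per hour of a costly uncaught mistake
cost_per_error = 3000.0          # expected cleanup cost of such a mistake ($)

# Supervised case: assume the reviewer catches mistakes before they cost anything.
net_supervised = value_per_hour - supervision_cost_per_hour
# Unsupervised case: no reviewer, but mistakes occasionally slip through.
net_unsupervised = value_per_hour - error_rate_per_hour * cost_per_error

print(f"net value, supervised:   ${net_supervised:.0f}/h")
print(f"net value, unsupervised: ${net_unsupervised:.0f}/h")
print("unsupervised pays off" if net_unsupervised > net_supervised else "keep a human in the loop")
```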

Anthropic's data reveals that users are moving beyond AI as a creative partner and are now delegating entire tasks. This "directive automation" behavior jumped from 27% to 39% of conversations in just nine months, signaling rapidly growing trust in AI for autonomous work completion.
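
Purely for illustration, a naive linear extrapolation of that shift, assuming the observed nine-month growth rate simply continues (which it may not):

```python
# Naive linear extrapolation of the "directive automation" share of conversations.
# Assumes the 27% -> 39% trend over nine months continues unchanged.
start_share, end_share = 0.27, 0.39
months_observed = 9
growth_per_month = (end_share - start_share) / months_observed  # ~1.3 points/month

for months_ahead in (6, 12, 18):
    projected = min(1.0, end_share + growth_per_month * months_ahead)
    print(f"+{months_ahead} months: ~{projected:.0%} of conversations")
```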