While the long-term trend for AI capability shows a seven-month doubling time, data since 2024 suggests an acceleration to a four-month doubling time. This faster pace has been a much better predictor of recent model performance, indicating a potential shift to a super-exponential trajectory.
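The gap between the two trend lines compounds quickly. A minimal sketch (assuming a clean exponential with the stated seven-month and four-month doubling times, which is a simplification of the actual fitted trends) shows how far the projections diverge after one and two years:

```python
# Capability multiplier after `months` under a given doubling time.
# Pure illustration of the compounding math, not METR's actual fit.
def multiplier(months: float, doubling_months: float) -> float:
    return 2 ** (months / doubling_months)

for t in (12, 24):
    slow = multiplier(t, 7)   # long-term seven-month trend
    fast = multiplier(t, 4)   # post-2024 four-month trend
    print(f"{t} months: {slow:.1f}x (7-mo) vs {fast:.1f}x (4-mo)")
# 12 months: ~3.3x vs 8.0x; 24 months: ~10.8x vs 64.0x
```

After two years the four-month curve implies roughly six times more progress than the seven-month curve, which is why distinguishing the two rates matters for forecasting.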

Related Insights

METR's research reveals a consistent, exponential trend in AI capabilities over the last five years. When measured by the length of tasks an AI can complete (based on human completion time), this 'time horizon' has been doubling approximately every seven months, providing a single, robust metric for tracking progress.

The surprisingly smooth, exponential trend in AI capabilities is viewed as more than just a technical machine learning phenomenon. It reflects broader economic dynamics, such as competition between firms, resource allocation, and investment cycles. This economic underpinning suggests the trend may be more robust and systematic than if it were based on isolated technical breakthroughs alone.

A key metric for AI progress is the length of task an AI can complete, measured in human completion time. This metric is currently doubling every four to seven months. At that exponential rate, an AI that handles a two-hour task today will be able to manage a two-week project autonomously within roughly two years.
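The two-hour-to-two-week extrapolation can be checked directly. A minimal sketch, assuming a "two-week project" means about 80 working hours (two 40-hour weeks) and using the faster four-month doubling time:

```python
import math

current_hours = 2      # task length an AI handles today
target_hours = 80      # two work-weeks at 40 hours/week (assumption)
doubling_months = 4    # the faster doubling time from recent data

# Number of doublings to grow from 2 hours to 80 hours,
# then the calendar time those doublings take.
doublings_needed = math.log2(target_hours / current_hours)
months_needed = doublings_needed * doubling_months

print(f"doublings needed: {doublings_needed:.2f}")    # ~5.32
print(f"months to target: {months_needed:.1f}")       # ~21.3
```

At the four-month rate the target arrives in about 21 months, i.e. within two years; at the slower seven-month rate the same milestone would take roughly three years.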

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

AI labs like Anthropic find that mid-tier models can be trained with reinforcement learning to outperform the labs' largest, most expensive models within just a few months, accelerating the pace of capability improvements.

AI progress was expected to stall in 2024-2025 due to hardware limitations on pre-training scaling laws. However, breakthroughs in post-training techniques like reasoning and test-time compute provided a new vector for improvement, bridging the gap until next-generation chips like NVIDIA's Blackwell arrived.

When developing AI-powered tools, don't be constrained by current model limitations. Given the exponential improvement curve, design your product for the capabilities you expect models to have in six months, so it is ready the moment the underlying technology catches up.

Kevin Rose argues against forming fixed opinions on AI capabilities. The technology leapfrogs every 4-8 weeks, meaning a developer who found AI coding assistants "horrible" three months ago is judging a tool that is now 3-4 times better. One must continuously re-evaluate AI tools to stay current.

Despite a media narrative of AI stagnation, the reality is an accelerating arms race. A rapid-fire succession of major model updates from OpenAI (GPT-5.2), Google (Gemini 3), and Anthropic (Claude 4.5) within just months proves the pace of innovation is increasing, not slowing down.

An analysis of AI model performance shows a 2-2.5x improvement in intelligence scores across all major players within the last year. This rapid advancement is leading to near-perfect scores on existing benchmarks, indicating a need for new, more challenging tests to measure future progress.