Even for complex, multi-hour tasks requiring millions of tokens, current AI agents are at least an order of magnitude cheaper than paying a human with relevant expertise. This significant cost advantage suggests that economic viability will not be a near-term bottleneck for deploying AI on increasingly sophisticated tasks.
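To make the order-of-magnitude claim concrete, here is a back-of-the-envelope comparison; every figure below (token usage, API price, contractor rate, task length) is a hypothetical placeholder rather than a number from the source.

```python
# Illustrative cost comparison with hypothetical figures (not METR's actual numbers).
TOKENS_PER_TASK = 5_000_000          # assumed agent token usage for a multi-hour task
PRICE_PER_MILLION_TOKENS = 10.0      # assumed API price in USD
HUMAN_HOURLY_RATE = 100.0            # assumed expert contractor rate in USD
HUMAN_HOURS = 8.0                    # assumed human completion time

agent_cost = TOKENS_PER_TASK / 1_000_000 * PRICE_PER_MILLION_TOKENS   # $50
human_cost = HUMAN_HOURLY_RATE * HUMAN_HOURS                          # $800

print(f"Agent cost: ${agent_cost:,.0f}")
print(f"Human cost: ${human_cost:,.0f}")
print(f"Agent is roughly {human_cost / agent_cost:.0f}x cheaper under these assumptions")
```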
While the long-term trend for AI capability shows a seven-month doubling time, data since 2024 suggests an acceleration to a four-month doubling time. This faster pace has been a much better predictor of recent model performance, indicating a potential shift to a super-exponential trajectory.
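The gap between those two rates compounds quickly. The sketch below extrapolates a time horizon of the form h(t) = h0 * 2^(t / T_d) under each doubling time; the two-hour starting horizon is an assumed placeholder, not a measured value.

```python
# Sketch of how a doubling time translates into time-horizon growth:
# h(t) = h0 * 2**(t / T_d), where T_d is the doubling time in months.
def horizon(months_ahead: float, doubling_months: float, current_hours: float = 2.0) -> float:
    """Extrapolated task time horizon (in hours) after `months_ahead` months."""
    return current_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 24, 36):
    slow = horizon(months, doubling_months=7)
    fast = horizon(months, doubling_months=4)
    print(f"+{months} mo: 7-month doubling -> {slow:7.1f} h, 4-month doubling -> {fast:8.1f} h")
```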
A major challenge for the 'time horizon' metric is its cost. As AI capabilities improve, the tasks needed to benchmark them grow from hours to weeks or months. The cost of paying human experts for these long durations to establish a baseline becomes extremely high, threatening the long-term viability of this evaluation method.
The choice to benchmark AI on software engineering, cybersecurity, and AI R&D tasks is deliberate. These domains are considered most relevant to threat models where AI systems could accelerate their own development, leading to a rapid, potentially catastrophic increase in capabilities. The research is directly tied to assessing existential risk.
Analysis of model performance reveals a distinct shift starting with GPT-4: newer models' success and failure patterns across tasks are far more correlated with one another than those of pre-GPT-4 models were. This could suggest a convergence in training data, architectures, or agent scaffolding methodologies across different labs.
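As a rough illustration of how such correlation can be measured, the sketch below compares models' per-task success/failure vectors using Pearson correlation; the success matrix is invented for illustration and is not METR data.

```python
import numpy as np

# Rows are models, columns are tasks; entries are 1 (success) or 0 (failure).
# The matrix below is made up for illustration.
successes = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],   # model A
    [1, 1, 0, 1, 0, 1, 0, 0],   # model B (similar pattern to A)
    [0, 1, 1, 0, 1, 0, 1, 1],   # model C (dissimilar pattern)
])

# Pearson correlation between each pair of success/failure vectors.
corr = np.corrcoef(successes)
print(np.round(corr, 2))
# High off-diagonal values (e.g. A vs B) indicate models that succeed and fail
# on largely the same tasks; low values indicate divergent failure modes.
```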
METR's research reveals a consistent, exponential trend in AI capabilities over the last five years. When measured by the length of tasks an AI can complete (based on human completion time), this 'time horizon' has been doubling approximately every seven months, providing a single, robust metric for tracking progress.
Human time to completion is a strong predictor of AI success, but it's not perfect. METR's analysis found that a task's qualitative 'messiness'—how clean and simple it is versus tricky and rough—also independently predicts whether an AI will succeed. This suggests that pure task length doesn't capture all aspects of difficulty for AIs.
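One way to check whether messiness adds predictive power is to compare a success model that uses task length alone against one that also includes a messiness score. The sketch below does this on synthetic data; the features, simulated coefficients, and 'messiness' ratings are all assumptions, not METR's dataset or method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic placeholder data: longer and messier tasks are simulated to be harder.
rng = np.random.default_rng(0)
n_tasks = 200
log_minutes = rng.uniform(0, 8, n_tasks)   # stand-in for log2 of human completion time
messiness = rng.uniform(0, 1, n_tasks)     # 0 = clean and simple, 1 = tricky and rough
logit = 3.0 - 0.8 * log_minutes - 2.0 * messiness
success = rng.random(n_tasks) < 1 / (1 + np.exp(-logit))

length_only = LogisticRegression().fit(log_minutes[:, None], success)
both = LogisticRegression().fit(np.column_stack([log_minutes, messiness]), success)

print("length-only accuracy:      ", length_only.score(log_minutes[:, None], success))
print("length+messiness accuracy: ",
      both.score(np.column_stack([log_minutes, messiness]), success))
```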
The surprisingly smooth, exponential trend in AI capabilities is viewed as more than just a technical machine learning phenomenon. It reflects broader economic dynamics, such as competition between firms, resource allocation, and investment cycles. This economic underpinning suggests the trend may be more robust and systematic than if it were based on isolated technical breakthroughs alone.
While the 'time horizon' metric effectively tracks AI capability, it's unclear at what point it signals danger. Researchers don't know if the critical threshold for AI-driven R&D acceleration is a 40-hour task, a week-long task, or something else. This gap makes it difficult to translate current capability measurements into a concrete risk timeline.
There's a significant gap between AI performance on structured benchmarks and its real-world utility. A randomized controlled trial (RCT) found that experienced open-source developers were actually slowed down by about 20% when using AI assistants, even though they believed the tools were speeding them up. This highlights the limitations of current evaluation methods.
To isolate agency rather than knowledge, METR's 'time horizon' metric measures how long tasks take human experts who already possess the required background knowledge. This methodology helps reconcile why models can be 'geniuses' on knowledge-intensive tasks (like IMO problems) yet 'idiots' on simple, multi-step actions.
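One way to operationalize such a horizon, broadly in the spirit of METR's approach, is to fit the probability that a model completes a task against the logarithm of how long the task takes a qualified human, then report the task length at which the fitted probability crosses 50%. The sketch below assumes that setup and uses invented per-task data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch of estimating a 50% time horizon: fit success probability
# against log2 of expert completion time and solve for the 50% crossing.
# The per-task data below are invented placeholders, not METR results.
human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480], dtype=float)
model_success = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0, 0])   # 1 = model completed the task

x = np.log2(human_minutes)[:, None]
fit = LogisticRegression().fit(x, model_success)

# P(success) = 0.5 where the linear term w * log2(t) + b crosses zero.
w, b = fit.coef_[0][0], fit.intercept_[0]
horizon_minutes = 2 ** (-b / w)
print(f"Estimated 50% time horizon: ~{horizon_minutes:.0f} minutes")
```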
