A major challenge for the 'time horizon' metric is its cost. As AI capabilities improve, the tasks needed to benchmark them grow from hours to weeks or months. Paying human experts to establish completion-time baselines over such long durations becomes extremely expensive, threatening the long-term viability of this evaluation method.
METR's research reveals a consistent, exponential trend in AI capabilities over the last five years. When measured by the length of tasks an AI can complete (based on human completion time), this 'time horizon' has been doubling approximately every seven months, providing a single, robust metric for tracking progress.
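As a rough sketch of that trend, the time horizon can be modeled as an exponential in elapsed time. The starting horizon and the one-hour-to-eight-hour example below are illustrative placeholders, not METR's fitted values.

```python
import math

def time_horizon(months_elapsed: float,
                 initial_horizon_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Time horizon (in human-hours) after `months_elapsed`, assuming a
    constant doubling period. Starting value and rate are illustrative."""
    return initial_horizon_hours * 2 ** (months_elapsed / doubling_months)

def months_until(target_hours: float, current_hours: float,
                 doubling_months: float = 7.0) -> float:
    """Months needed for the horizon to grow from `current_hours` to
    `target_hours` at the assumed doubling rate."""
    return doubling_months * math.log2(target_hours / current_hours)

# Growing from a 1-hour to an 8-hour horizon takes three doublings:
print(months_until(8, 1))  # 21.0 months at a 7-month doubling time
```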
Human time to completion is a strong predictor of AI success, but it's not perfect. METR's analysis found that a task's qualitative 'messiness'—how clean and simple it is versus tricky and rough—also independently predicts whether an AI will succeed. This suggests that pure task length doesn't capture all aspects of difficulty for AIs.
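A minimal sketch of that kind of analysis, using synthetic data and a hypothetical 0-to-1 messiness score rather than METR's actual ratings: regress agent success on both log human time and messiness, and check whether messiness carries weight after controlling for length.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic tasks: log2 of human completion time (minutes) and a 0-1 messiness score.
log_minutes = rng.uniform(0, 9, n)   # ~1 minute up to ~500 minutes
messiness = rng.uniform(0, 1, n)

# Simulated agent outcomes: failure gets likelier with length AND with messiness.
logit = 3.0 - 0.8 * log_minutes - 2.0 * messiness
success = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([log_minutes, messiness])
model = LogisticRegression().fit(X, success)

# A clearly negative messiness coefficient, after controlling for task length,
# indicates messiness predicts failure beyond duration alone.
print(dict(zip(["log_minutes", "messiness"], model.coef_[0])))
```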
While the 'time horizon' metric effectively tracks AI capability, it's unclear at what point it signals danger. Researchers don't know if the critical threshold for AI-driven R&D acceleration is a 40-hour task, a week-long task, or something else. This gap makes it difficult to translate current capability measurements into a concrete risk timeline.
A key metric of AI progress is the length of task, measured in human-hours, that a model can complete. This metric is currently doubling every four to seven months. At that exponential rate, an AI that handles a two-hour task today would be able to manage a two-week project autonomously within roughly two years.
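Checking that arithmetic under explicit assumptions (a two-week project counted as roughly 80 working hours, and both ends of the quoted doubling range; the figures are illustrative, not a forecast):

```python
import math

current_horizon_hours = 2    # today's autonomous task length
target_hours = 80            # a two-week project at ~40 working hours per week

doublings_needed = math.log2(target_hours / current_horizon_hours)  # ~5.3

for doubling_months in (4, 7):   # the quoted range of doubling times
    months = doublings_needed * doubling_months
    print(f"{doubling_months}-month doubling: ~{months:.0f} months "
          f"({months / 12:.1f} years) to reach a two-week horizon")
# The four-month rate lands under two years; the seven-month rate is closer to three.
```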
To isolate agency rather than just knowledge, METR's 'time horizon' metric measures how long tasks take human experts who already possess the required background knowledge. This methodology helps explain why models can be 'geniuses' on knowledge-intensive tasks (like IMO problems) yet 'idiots' at simple, multi-step actions.
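A minimal sketch of how a 50% time horizon could be derived from task-level results, fitting agent success against log human-expert completion time and inverting the fit at 50%; the data points and the simple logistic fit are illustrative assumptions, not METR's published pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-task records: expert completion time (minutes) and whether the agent succeeded.
human_minutes = np.array([2, 5, 10, 15, 30, 60, 120, 240, 480, 960], dtype=float)
agent_succeeded = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

X = np.log2(human_minutes).reshape(-1, 1)
fit = LogisticRegression().fit(X, agent_succeeded)

# Invert the logistic: success probability is 0.5 where w * x + b = 0.
w, b = fit.coef_[0][0], fit.intercept_[0]
horizon_minutes = 2 ** (-b / w)
print(f"Estimated 50% time horizon: ~{horizon_minutes:.0f} human-minutes")
```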
Progress on complex, long-running agentic tasks is better measured by tokens consumed than by raw time. Improving token efficiency, as seen in the move from GPT-5 to GPT-5.1, directly enables more tool calls and actions within a feasible operational budget, unlocking greater capability.
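A back-of-the-envelope helper for that point: at a fixed budget, the number of tool calls an agent can afford scales inversely with the tokens spent per call. Every number below is a made-up placeholder.

```python
def affordable_tool_calls(budget_usd: float,
                          usd_per_million_tokens: float,
                          tokens_per_tool_call: int) -> int:
    """Tool calls that fit in a dollar budget at a given token price and
    per-call token cost. Every figure passed in is illustrative."""
    total_tokens = budget_usd / usd_per_million_tokens * 1_000_000
    return int(total_tokens // tokens_per_tool_call)

# Halving tokens per call doubles the actions available at the same budget.
print(affordable_tool_calls(10.0, 5.0, 4_000))  # 500 calls
print(affordable_tool_calls(10.0, 5.0, 2_000))  # 1000 calls
```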
Traditional AI benchmarks are seen as increasingly incremental and less interesting. The new frontier for evaluating a model's true capability lies in applied, complex tasks that mimic real-world interaction, such as building in Minecraft (MC Bench) or managing a simulated business (VendingBench), which are more revealing of raw intelligence.
Even for complex, multi-hour tasks requiring millions of tokens, current AI agents are at least an order of magnitude cheaper than paying a human with relevant expertise. This significant cost advantage suggests that economic viability will not be a near-term bottleneck for deploying AI on increasingly sophisticated tasks.
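An illustrative comparison of the two cost lines; neither the token price nor the expert hourly rate comes from the source, they are placeholders chosen only to show the shape of the calculation.

```python
def agent_cost_usd(total_tokens: int, usd_per_million_tokens: float) -> float:
    """API cost of a long agentic run; the token price is a placeholder."""
    return total_tokens / 1_000_000 * usd_per_million_tokens

def human_cost_usd(hours: float, hourly_rate_usd: float) -> float:
    """Cost of an expert doing the same task; the rate is a placeholder."""
    return hours * hourly_rate_usd

# A multi-hour task: ~5M tokens for the agent vs. four expert hours.
agent = agent_cost_usd(5_000_000, usd_per_million_tokens=10.0)  # $50
human = human_cost_usd(4, hourly_rate_usd=150.0)                # $600
print(f"agent ${agent:.0f} vs human ${human:.0f} "
      f"({human / agent:.0f}x more expensive)")
```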
OpenAI's new GDPval benchmark evaluates models on complex, real-world knowledge-work tasks, not abstract IQ tests. This pivot signals that the true measure of AI progress is now its ability to perform economically valuable human jobs, making performance metrics directly comparable to professional output.
Standardized AI benchmarks are saturated and becoming less relevant to real-world use cases. The true measure of a model's improvement is now found in the custom, internal evaluations (evals) built by application-layer companies. Progress on a legal AI tool's own evals, for example, is a more meaningful indicator than a generic test score.
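A sketch of what such an internal eval can look like in its simplest form; the task prompts, grader heuristics, and run_model hook below are all hypothetical stand-ins for a product team's real checks.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    grade: Callable[[str], bool]  # domain-specific check written by the app team

def run_eval(run_model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score a model on a custom, product-specific eval suite."""
    passed = sum(case.grade(run_model(case.prompt)) for case in cases)
    return passed / len(cases)

# Hypothetical legal-tool checks: crude string heuristics standing in for real graders.
cases = [
    EvalCase("Summarize the indemnification clause in one sentence.",
             grade=lambda out: "indemnif" in out.lower()),
    EvalCase("State the termination notice period from the contract.",
             grade=lambda out: any(ch.isdigit() for ch in out)),
]

fake_model = lambda prompt: "The indemnification clause requires 30 days' notice."
print(f"pass rate: {run_eval(fake_model, cases):.0%}")
```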