While the "time horizon" metric effectively tracks AI capability, it's unclear at what point it signals danger. Researchers don't know if the critical threshold for AI-driven R&D acceleration is a 40-hour task, a week-long task, or something else. This gap makes it difficult to translate current capability measurements into a concrete risk timeline.

Related Insights

Coined by I. J. Good in 1965, the term "intelligence explosion" describes a runaway feedback loop. An AI capable of conducting AI research could use its intelligence to improve itself. This newly enhanced intelligence would make it even better at AI research, leading to exponential, uncontrollable growth in capability. This "fast takeoff" could leave humanity far behind in a very short period.
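A toy simulation can make the feedback dynamic concrete. This is a sketch, not a model of any real AI system: the growth exponent `p`, rate constant `k`, and all numeric values below are illustrative assumptions. If an AI's rate of improvement is proportional to its capability, growth is exponential; if improvement scales superlinearly, the math produces a finite-time blowup, the caricature of a "fast takeoff."

```python
# Toy model of recursive self-improvement (purely illustrative).
# capability' = k * capability**p  -- p=1 gives steady exponential growth;
# p>1 means each gain speeds up the next gain, accelerating without bound.

def simulate(p, k=0.1, capability=1.0, dt=0.01, steps=1500):
    """Euler-integrate dC/dt = k * C**p and return the trajectory."""
    trajectory = [capability]
    for _ in range(steps):
        capability += k * capability**p * dt
        trajectory.append(capability)
    return trajectory

linear = simulate(p=1.0)   # ordinary exponential: doubles on a fixed clock
runaway = simulate(p=1.5)  # superlinear feedback: doublings keep shrinking

print(f"p=1.0 final capability: {linear[-1]:.1f}")   # ~4.5
print(f"p=1.5 final capability: {runaway[-1]:.1f}")  # ~16, and accelerating
```

At the same elapsed time the superlinear loop is several times further along, and the gap widens without bound as the blowup point approaches.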

Unlike traditional engineering, breakthroughs in foundational AI research often feel binary. A model can be completely broken until a handful of key insights are discovered, at which point it suddenly works. This "all or nothing" dynamic makes it impossible to predict timelines, as you don't know if a solution is a week or two years away.

METR's research reveals a consistent, exponential trend in AI capabilities over the last five years. When measured by the length of tasks an AI can complete (based on human completion time), this "time horizon" has been doubling approximately every seven months, providing a single, robust metric for tracking progress.

Fears of AI's "recursive self-improvement" should be contextualized. Every major general-purpose technology, from iron to computers, has been used to improve itself. While AI's speed may differ, this self-catalyzing loop is a standard characteristic of transformative technologies and has not previously resulted in runaway existential threats.

A key metric for AI progress is the length of task (measured in human-hours) an AI can complete. This "time horizon" is currently doubling every four to seven months. At the faster end of that range, an AI that handles a two-hour task today will be able to manage a two-week project autonomously within roughly two years.
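The arithmetic behind that extrapolation is easy to check. The sketch below assumes a 40-hour work week (so a two-week project is 80 human-hours); the doubling times come from the trend above.

```python
import math

# How long until a 2-hour time horizon reaches a 2-week project?
# Assumes a 40-hour work week, so 2 weeks = 80 human-hours.
current_horizon_hours = 2
target_hours = 2 * 40  # two working weeks

doublings = math.log2(target_hours / current_horizon_hours)  # ~5.3

for doubling_time_months in (4, 7):
    months = doublings * doubling_time_months
    print(f"{doubling_time_months}-month doubling: ~{months:.0f} months")
# 4-month doubling: ~21 months
# 7-month doubling: ~37 months
```

The two-year figure holds at the four-month doubling rate; at seven months per doubling, the same milestone is roughly three years out.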

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

The choice to benchmark AI on software engineering, cybersecurity, and AI R&D tasks is deliberate. These domains are considered most relevant to threat models where AI systems could accelerate their own development, leading to a rapid, potentially catastrophic increase in capabilities. The research is directly tied to assessing existential risk.

With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.

While the long-term trend for AI capability shows a seven-month doubling time, data since 2024 suggests an acceleration to a four-month doubling time. This faster pace has been a much better predictor of recent model performance, indicating a potential shift to a super-exponential trajectory.
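The difference between those two rates compounds quickly. The projection below is a sketch: the two-hour starting horizon is an illustrative assumption, not a measured value.

```python
# Project the time horizon forward under the two candidate doubling times.
# The 2-hour starting horizon is an assumption for illustration only.
start_horizon_hours = 2.0

for doubling_months in (7, 4):
    horizons = [start_horizon_hours * 2 ** (m / doubling_months)
                for m in range(0, 25, 6)]
    row = ", ".join(f"{h:6.1f}h" for h in horizons)
    print(f"{doubling_months}-month doubling at months 0/6/12/18/24: {row}")
# After two years: ~21h at a 7-month doubling vs ~128h at a 4-month doubling.
```

After two years the two trajectories differ by roughly a factor of six, which is why pinning down the current doubling time matters so much for forecasting.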

A major challenge for the "time horizon" metric is its cost. As AI capabilities improve, the tasks needed to benchmark them grow from hours to weeks or months. The cost of paying human experts for these long durations to establish a baseline becomes extremely high, threatening the long-term viability of this evaluation method.
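A rough cost model shows how quickly baselining scales. Every number here is an assumption for illustration: the expert rate, the attempts per task, and the task lengths are not figures from the research.

```python
# Rough cost of human-expert baselines as task length grows.
# $150/hour and 8 attempts per task are illustrative assumptions.
HOURLY_RATE_USD = 150
ATTEMPTS_PER_TASK = 8  # several baseliners per task for a robust estimate

for task_hours in (2, 8, 40, 160):  # 2h, a day, a week, a month of work
    cost = task_hours * HOURLY_RATE_USD * ATTEMPTS_PER_TASK
    print(f"{task_hours:4d}-hour task baseline: ~${cost:,}")
# Under these assumptions a single month-long task costs ~$192,000 to
# baseline, and a benchmark needs many such tasks.
```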