Human time to completion is a strong predictor of AI success, but it's not perfect. METR's analysis found that a task's qualitative 'messiness'—how clean and simple it is versus tricky and rough—also independently predicts whether an AI will succeed. This suggests that pure task length doesn't capture all aspects of difficulty for AIs.
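As a purely synthetic illustration of what "independently predicts" means here: fit a logistic model on both log human completion time and a messiness rating, then check whether the messiness coefficient still carries weight once time is already in the model. The data, scale, and coefficients below are invented for the sketch and are not METR's.

```python
# Synthetic illustration: regress task success on both log human time and a
# messiness rating. If the messiness slope stays clearly nonzero alongside the
# time slope, messiness carries signal beyond task length. Numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
log_time = rng.normal(3.0, 1.0, n)        # log human minutes per task
messiness = rng.uniform(0.0, 10.0, n)     # hypothetical 0-10 messiness rating
p_success = 1 / (1 + np.exp(-(2.5 - 0.6 * log_time - 0.3 * messiness)))
success = rng.binomial(1, p_success)

X = np.column_stack([log_time, messiness])
clf = LogisticRegression().fit(X, success)
print(clf.coef_)  # both slopes come out negative: longer and messier both hurt
```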
While AI can attempt complex, hour-long tasks at a 50% success rate, its reliability plummets as tasks grow longer. For mission-critical enterprise use requiring 99.9% success, current AI can only reliably complete tasks that take about three seconds. This necessitates breaking large problems into many small, reliable micro-tasks.
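A back-of-the-envelope sketch of why micro-task decomposition matters: if step outcomes were independent, overall success would be the product of per-step success rates, so even small per-step error rates compound quickly. The step counts below are illustrative, not from the source.

```python
# Overall success of a chain of independent steps is the product of the
# per-step success rates, so reliability decays geometrically with length.
def chain_success(per_step_success: float, num_steps: int) -> float:
    return per_step_success ** num_steps

print(chain_success(0.99, 100))   # ~0.37: 99%-reliable steps collapse over 100 steps
print(chain_success(0.999, 100))  # ~0.90: 99.9%-reliable micro-tasks hold up far better
```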
AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress requires a messy, societal conversation to define 'success,' not just a developer-built technical leaderboard.
AI models show impressive performance on evaluation benchmarks but underwhelm in real-world applications. This gap exists because researchers, focused on evals, create reinforcement learning (RL) environments that mirror test tasks. This leads to narrow intelligence that doesn't generalize, a form of human-driven reward hacking.
An AI agent's failure on a complex task like tax preparation isn't due to a lack of intelligence. Instead, it's often blocked by a single, unpredictable "tiny thing," such as misinterpreting two boxes on a W-4 form. This highlights that reliability challenges are granular and not always intuitive.
Generic evaluation metrics like "helpfulness" or "conciseness" are vague and untrustworthy. A better approach is to first perform manual error analysis to find recurring problems (e.g., "tour scheduling failures"). Then, build specific, targeted evaluations (evals) that directly measure the frequency of these concrete issues, making metrics meaningful.
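As a hedged sketch of what such a targeted eval might look like, the snippet below counts how often one concrete failure mode appears in logged transcripts; the transcript format and the `is_tour_scheduling_failure` detector are hypothetical placeholders, not an existing API.

```python
from typing import Callable

def failure_rate(transcripts: list[str], is_failure: Callable[[str], bool]) -> float:
    """Fraction of logged transcripts that exhibit one specific failure mode."""
    return sum(1 for t in transcripts if is_failure(t)) / len(transcripts)

def is_tour_scheduling_failure(transcript: str) -> bool:
    # Placeholder detector: in practice this could be a regex, a rubric-based
    # LLM judge, or a check against ground truth from a booking system.
    return "no available time slots" in transcript.lower()

# Track this one number over time instead of a generic 1-5 "helpfulness" score:
# rate = failure_rate(logged_transcripts, is_tour_scheduling_failure)
```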
Andrej Karpathy's 'Software 2.0' framework posits that AI automates tasks that are easily *verifiable*. This explains the 'jagged frontier' of AI progress: fields like math and code, where correctness is verifiable, advance rapidly. In contrast, creative and strategic tasks, where success is subjective and hard to verify, lag significantly behind.
To isolate agency rather than just knowledge, METR's 'time horizon' metric measures how long tasks take human experts who already possess the required background knowledge. This methodology aims to reconcile why models can be 'geniuses' on knowledge-intensive tasks (like IMO problems) yet 'idiots' on simple, multi-step actions.
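A minimal sketch of the core idea, assuming the general approach of fitting success probability against the log of human-expert completion time and reading off the 50% crossing; the data points below are synthetic, and the real analysis pools many tasks and models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic (human-expert minutes per task, did the model succeed?) pairs.
human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240])
model_succeeded = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0])

# Fit success probability against log task length.
X = np.log(human_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, model_succeeded)

# The 50% "time horizon" is where intercept + coef * log(t) = 0.
t50 = np.exp(-clf.intercept_[0] / clf.coef_[0][0])
print(f"50% time horizon ~ {t50:.0f} human-expert minutes")
```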
The test intentionally used a simple, conversational prompt one might give a colleague ("our blog is not good...make it better"). The models' varying success reveals that a key differentiator is the ability to interpret high-level intent and independently research best practices, rather than requiring meticulously detailed instructions.
Traditional AI benchmarks are seen as increasingly incremental and less interesting. The new frontier for evaluating a model's true capability lies in applied, complex tasks that mimic real-world interaction, such as building in Minecraft (MC Bench) or managing a simulated business (VendingBench), which are more revealing of raw intelligence.
Demis Hassabis identifies a key obstacle to AGI: unlike in math or games, where answers can be verified, the messy real world lacks clear success metrics. This makes it difficult for AI systems to run self-improvement loops, limiting their ability to learn and adapt outside of highly structured domains.