The most significant gap in AI research is its focus on academic evaluations rather than tasks customers actually value, such as medical diagnosis or legal drafting. The fix is to have real-world experts define benchmarks that measure performance on economically relevant work.
While public benchmarks show general model improvement, they are almost orthogonal to enterprise adoption. Enterprises don't care about general capabilities; they need near-perfect precision on highly specific, internal workflows. This requires extensive fine-tuning and validation, not chasing leaderboard scores.
To measure an AI model's economic value, survey domain experts on how they allocate their time across various tasks. This time-allocation data serves as a proxy for the economic weight of each task, against which the model's performance can be scored.
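A minimal sketch of that scoring idea, with hypothetical tasks, time shares, and scores (none of these numbers come from the source), treats the economic-value score as a time-weighted average of per-task performance:

```python
# Illustrative sketch: weight a model's per-task performance by the share of
# expert time each task consumes. All task names and numbers are hypothetical.

# Fraction of a domain expert's week spent on each task (sums to 1.0),
# taken from an expert time-allocation survey.
time_allocation = {
    "draft_contracts": 0.40,
    "review_case_law": 0.35,
    "client_correspondence": 0.25,
}

# Model performance on each task, e.g. an expert-graded pass rate in [0, 1].
model_scores = {
    "draft_contracts": 0.72,
    "review_case_law": 0.55,
    "client_correspondence": 0.90,
}

# Economic-value proxy: time-weighted average of task scores.
economic_value = sum(
    time_allocation[task] * model_scores[task] for task in time_allocation
)
print(f"Time-weighted economic value score: {economic_value:.2f}")  # roughly 0.71
```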
There's a significant gap between AI performance on structured benchmarks and its real-world utility. A randomized controlled trial (RCT) found that experienced open-source developers were actually about 20% slower when using AI assistants, even though they believed the tools were speeding them up. This highlights the limitations of current evaluation methods.
Standardized benchmarks for AI models are largely irrelevant for business applications. Companies need to create their own evaluation systems tailored to their specific industry, workflows, and use cases to accurately assess which new model provides a tangible benefit and ROI.
AI struggles with long-horizon tasks not just due to technical limits, but because we lack good ways to measure performance. Once effective evaluations (evals) for these capabilities exist, researchers can rapidly optimize models against them, accelerating progress significantly.
Just as standardized tests fail to capture a student's full potential, AI benchmarks often don't reflect real-world performance. The true value comes from the 'last mile' ingenuity of productization and workflow integration, not just raw model scores, which can be misleading.
Traditional AI benchmarks are seen as increasingly incremental and less interesting. The new frontier for evaluating a model's true capability lies in applied, complex tasks that mimic real-world interaction, such as building in Minecraft (MC Bench) or managing a simulated business (VendingBench), which are more revealing of raw intelligence.
OpenAI's new GDPval benchmark evaluates models on complex, real-world knowledge work tasks, not abstract IQ tests. This pivot signifies that the true measure of AI progress is now its ability to perform economically valuable human work, making model performance directly comparable to the output of professionals.
Standardized AI benchmarks are saturated and becoming less relevant for real-world use cases. The true measure of a model's improvement is now found in custom, internal evaluations (evals) created by application-layer companies. A legal AI company's gains on its own evals, for example, are a more meaningful indicator of progress than a generic test score.
Instead of waiting for external reports, companies should develop their own AI model evaluations. By defining key tasks for specific roles and testing new models against them with standard prompts, businesses can create a relevant, internal benchmark.
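A minimal sketch of such an internal benchmark, assuming nothing beyond a generic `ask_model(prompt)` callable; the roles, tasks, prompts, and keyword-based grading below are purely illustrative, not a prescribed implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalTask:
    """One internal benchmark case: a role-specific task, a standard prompt,
    and a crude check that the output covers required points."""
    role: str
    name: str
    prompt: str
    required_keywords: list[str]  # simple stand-in for expert grading

# Hypothetical task definitions for two roles.
TASKS = [
    EvalTask(
        role="paralegal",
        name="nda_summary",
        prompt="Summarize the key obligations in the attached NDA in five bullet points.",
        required_keywords=["confidentiality", "term", "governing law"],
    ),
    EvalTask(
        role="support_agent",
        name="refund_policy",
        prompt="Draft a reply to a customer asking about our 30-day refund policy.",
        required_keywords=["30-day", "refund"],
    ),
]

def run_internal_benchmark(ask_model: Callable[[str], str]) -> dict[str, float]:
    """Run every task through a candidate model and return per-role pass rates."""
    passes: dict[str, list[bool]] = {}
    for task in TASKS:
        answer = ask_model(task.prompt).lower()
        passed = all(kw.lower() in answer for kw in task.required_keywords)
        passes.setdefault(task.role, []).append(passed)
    return {role: sum(results) / len(results) for role, results in passes.items()}

# Usage: wrap each candidate model behind the same ask_model signature and
# compare run_internal_benchmark(...) results whenever a new model is released.
```

Because every model is tested with the same standard prompts and the same role-specific tasks, the resulting pass rates form a stable internal baseline that can be tracked across model releases.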