AI labs often use different, optimized prompting strategies when reporting performance, making direct comparisons unreliable. For example, Google reported Gemini 1.0 Ultra's MMLU score using an unpublished chain-of-thought strategy with 32 samples (CoT@32) to boost the result. This highlights the need for neutral third-party evaluation.
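A minimal sketch of why the prompting strategy matters, assuming a hypothetical `query_model` API call: the number a lab reports depends on whether its harness takes a single zero-shot answer or samples many chain-of-thought completions and majority-votes over them (a CoT@32-style strategy), so two harnesses can produce different scores from identical model weights.

```python
import re
from collections import Counter

def query_model(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a call to some provider's completion API (an assumption, not a real client)."""
    raise NotImplementedError

def zero_shot_answer(question: str) -> str:
    # One greedy completion; the first character is taken as the chosen letter.
    reply = query_model(f"{question}\nAnswer with a single letter (A-D).", temperature=0.0)
    return reply.strip()[:1].upper()

def cot_majority_answer(question: str, k: int = 32) -> str:
    # CoT@k-style strategy: sample k chain-of-thought completions,
    # extract a letter from each, and take the majority vote.
    votes = []
    for _ in range(k):
        reply = query_model(f"{question}\nThink step by step, then end with 'Answer: <letter>'.")
        match = re.search(r"Answer:\s*([A-D])", reply, re.IGNORECASE)
        if match:
            votes.append(match.group(1).upper())
    return Counter(votes).most_common(1)[0][0] if votes else ""
```

Both functions grade the same model on the same question, yet they can disagree often enough to move an aggregate benchmark score by several points.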

Related Insights

The proliferation of AI leaderboards incentivizes companies to optimize models for specific benchmarks. This creates a risk of "acing the SATs": models excel on tests without necessarily making progress on real-world problems, and the focus on gaming metrics can diverge from creating genuine user value.

Public leaderboards like LM Arena are becoming unreliable proxies for model performance. Teams implicitly or explicitly "teach to the test," optimizing for specific public test sets. The superior strategy is to focus on internal, proprietary evaluation metrics and use public benchmarks only as a final, confirmatory check, not as a primary development target.

To ensure they are testing the same models available to the public, independent evaluators register anonymous accounts to run evals. This prevents labs from providing specially tuned private endpoints that perform better than their publicly available APIs, thereby maintaining the integrity of the independent analysis.

As benchmarks become standard, AI labs optimize models to excel at them, leading to score inflation without necessarily improving generalized intelligence. The solution isn't a single perfect test, but continuously creating new evals that measure capabilities relevant to real-world user needs.

Just as standardized tests fail to capture a student's full potential, AI benchmarks often don't reflect real-world performance. The true value comes from the 'last mile' ingenuity of productization and workflow integration, not just raw model scores, which can be misleading.

Arena differentiates itself from competitors like Artificial Analysis by evaluating models on organic, user-generated prompts. This provides a level of real-world relevance and data diversity that platforms using pre-generated test cases or rerunning public benchmarks cannot replicate.

Seemingly simple benchmarks yield wildly different results if not run under identical conditions. Third-party evaluators must run tests themselves because labs often use optimized prompts to inflate scores. Even then, challenges like parsing inconsistent answer formats make truly fair comparison a significant technical hurdle.
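As an illustration of the parsing problem, here is a minimal sketch (not any lab's actual grading code; the patterns and examples are assumptions) of how an evaluator might tolerate the many formats in which models return a multiple-choice answer.

```python
import re

def extract_choice(reply: str) -> str | None:
    """Try progressively looser patterns to pull a multiple-choice letter out of a reply."""
    patterns = [
        r"answer\s*(?:is|:)?\s*\(?([A-D])\)?\b",   # "Answer: B", "the answer is (B)"
        r"^\(?([A-D])\)?[.)\s]*$",                  # the reply is just "B", "(B)", or "B."
        r"\b([A-D])\b(?!.*\b[A-D]\b)",              # fall back to the last standalone letter
    ]
    text = reply.strip()
    for pattern in patterns:
        match = re.search(pattern, text, re.IGNORECASE | re.DOTALL)
        if match:
            return match.group(1).upper()
    return None

# The same correct answer arrives in many shapes; strict string matching
# would score several of these as wrong.
assert extract_choice("Answer: B.") == "B"
assert extract_choice("(b)") == "B"
assert extract_choice("I believe the answer is C") == "C"
```

Even a tolerant extractor like this embeds judgment calls (which format wins, what counts as a refusal), which is exactly why two evaluators running the "same" benchmark can disagree.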

Don't trust academic benchmarks. Labs often "hill climb" or game them for marketing purposes, which doesn't translate to real-world capability. Furthermore, many of these benchmarks contain incorrect answers and messy data, making them an unreliable measure of true AI advancement.

To ensure they're testing publicly available models, Artificial Analysis creates anonymous accounts to run benchmarks without the provider's knowledge. Labs agree to this policy because it guarantees fairness and prevents their competitors from receiving special treatment or manipulating results, creating a stable, trusted equilibrium.

Good Star Labs found GPT-5's performance in their Diplomacy game skyrocketed with optimized prompts, moving it from the bottom to the top. This shows a model's inherent capability can be masked or revealed by its prompt, making "best model" a context-dependent title rather than an absolute one.
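A sketch of the same idea in harness form, with `play_match` and both prompts as purely hypothetical stand-ins: the only variable between the two scores below is the prompt, so any movement in the ranking is a prompt effect rather than a change in the underlying model.

```python
from statistics import mean

def play_match(model: str, system_prompt: str, scenario: str) -> float:
    """Placeholder: run one Diplomacy-style match and return a score in [0, 1] (an assumption)."""
    raise NotImplementedError

def score(model: str, system_prompt: str, scenarios: list[str]) -> float:
    # Average performance over a fixed set of scenarios so only the prompt varies.
    return mean(play_match(model, system_prompt, s) for s in scenarios)

DEFAULT_PROMPT = "You are a helpful assistant playing Diplomacy."
TUNED_PROMPT = (
    "You are a patient, strategic negotiator. Before each order, "
    "plan several turns ahead and weigh every rival's incentives."
)

# score("gpt-5", DEFAULT_PROMPT, scenarios) vs. score("gpt-5", TUNED_PROMPT, scenarios):
# identical weights, different prompt, potentially a very different leaderboard position.
```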