
Companies like Meta are engaging in "chart crimes" to frame new models in the best possible light. By selectively highlighting winning benchmarks (e.g., in blue), they create a visual impression of superiority, even when the model underperforms in other key areas. This signals that benchmarks are becoming marketing tools rather than objective measures.

Related Insights

The proliferation of AI leaderboards incentivizes companies to optimize models for specific benchmarks. This creates a risk of "acing the SATs": models excel on the tests without necessarily making progress on real-world problems. A focus on gaming metrics can diverge from creating genuine user value.

Public leaderboards like LM Arena are becoming unreliable proxies for model performance, because teams implicitly or explicitly overfit to them by optimizing for specific test sets. The superior strategy is to develop against internal, proprietary evaluation metrics and use public benchmarks only as a final, confirmatory check, never as a primary development target.
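A minimal sketch of that workflow, with hypothetical function names and thresholds (not any lab's actual pipeline): releases are gated on internal evals, and the public benchmark is run once afterward as a sanity check.

```python
# Hypothetical sketch: gate releases on internal, proprietary evals;
# run the public benchmark once as a final confirmatory check.

def should_ship(internal_scores: dict[str, float],
                thresholds: dict[str, float]) -> bool:
    """Ship only if every internal eval clears its bar."""
    return all(internal_scores.get(name, 0.0) >= bar
               for name, bar in thresholds.items())

def confirm_on_public_benchmark(public_score: float, last_release: float) -> None:
    """Confirmatory check, run after the decision; never an optimization target."""
    if public_score < last_release:
        print("Public score regressed; investigate rather than hill-climb.")

if __name__ == "__main__":
    scores = {"internal_coding": 0.82, "internal_agentic": 0.74}
    bars = {"internal_coding": 0.80, "internal_agentic": 0.70}
    if should_ship(scores, bars):
        confirm_on_public_benchmark(public_score=1287.0, last_release=1290.0)
```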

Current AI benchmarks have become targets of competition, a textbook case of Goodhart's Law. Models are optimized to top leaderboards rather than to develop the general capabilities the benchmarks were designed to measure, creating a false sense of progress and failing to predict real-world performance.

There's a significant gap between AI performance on structured benchmarks and its real-world utility. A randomized controlled trial (RCT) found that experienced open-source developers were actually about 20% slower when using AI assistants, even though they believed the tools were speeding them up, a striking miscalibration. This highlights the limitations of current evaluation methods.

Just as standardized tests fail to capture a student's full potential, AI benchmarks often don't reflect real-world performance. The real value comes from "last mile" ingenuity in productization and workflow integration; raw model scores alone can mislead.

The gap between benchmark scores and real-world performance suggests labs achieve high scores by distilling from superior models or training on specific evals. This makes benchmarks a poor proxy for genuine capability, and that skepticism should be applied to every new model release.

Don't trust academic benchmarks. Labs often "hill-climb" on them or game them for marketing purposes, and the gains don't translate to real-world capability. Many of these benchmarks also contain incorrect answers and messy data, making them an unreliable measure of true AI advancement.

Meta's Muse Spark model card highlighted its top score in blue, implying overall superiority. Critics called this a "chart crime," as the model underperformed on other key benchmarks. This marketing tactic selectively visualizes data to create a false impression of a model's capabilities relative to competitors.
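To make the tactic concrete, here is a minimal matplotlib sketch with invented numbers (not the actual model-card data): every benchmark is plotted, but only the wins get the saturated blue, so the losses visually recede.

```python
# Hypothetical sketch of selective highlighting; all figures are invented.
import matplotlib.pyplot as plt

benchmarks = ["Bench A", "Bench B", "Bench C", "Bench D"]
ours       = [71, 58, 62, 55]
theirs     = [68, 66, 70, 61]  # the competitor wins three of four

# The "chart crime": saturated blue only where we win, muted grey elsewhere.
colors = ["#1f77b4" if o > t else "#d9d9d9" for o, t in zip(ours, theirs)]
x = range(len(benchmarks))
plt.bar([i - 0.2 for i in x], ours, width=0.4, color=colors, label="Our model")
plt.bar([i + 0.2 for i in x], theirs, width=0.4, color="#d9d9d9", label="Competitor")
plt.xticks(list(x), benchmarks)
plt.ylabel("Score")
plt.legend()
plt.show()
```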

AI labs often use different, optimized prompting strategies when reporting performance, making direct comparisons meaningless. For example, Google reported Gemini 1.0's headline MMLU score with a nonstandard chain-of-thought setup sampling 32 reasoning chains (CoT@32), while competitors' numbers used few-shot prompting. This highlights the need for neutral third-party evaluation.
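A small sketch of why these knobs matter, using a hypothetical prompt builder and no real model call: changing the shot count or reasoning format turns the "same" benchmark into a different experiment.

```python
# Hypothetical sketch: the same question yields a different experiment
# depending on how many exemplars (shots) and what reasoning format
# the evaluator chooses.

def build_prompt(question: str,
                 exemplars: list[tuple[str, str]],
                 use_cot: bool = False) -> str:
    """Assemble a k-shot prompt; k and CoT formatting are evaluator choices."""
    parts = []
    for q, a in exemplars:
        answer = f"Let's think step by step. {a}" if use_cot else a
        parts.append(f"Q: {q}\nA: {answer}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

shots = [("2 + 2?", "4"), ("Capital of France?", "Paris")]
print(build_prompt("5 * 6?", shots, use_cot=True))
# A 5-shot plain prompt and a 32-sample chain-of-thought run are different
# experiments; quoting their scores side by side is the comparability problem.
```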

Popular AI coding benchmarks can be deceptive because they prioritize task completion over efficiency. A model that uses significantly more tokens and time to reach a solution is fundamentally inferior to one that delivers an elegant result faster, even if both complete the task.
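One way to make this concrete is a toy efficiency-adjusted score, a hypothetical metric rather than any benchmark's official formula, that discounts a pass by the tokens and wall-clock time it consumed:

```python
# Hypothetical metric: scale a pass/fail result by how much of a token
# and time budget was spent reaching the solution.

def efficiency_adjusted_score(solved: bool, tokens_used: int, seconds: float,
                              token_budget: int = 10_000,
                              time_budget: float = 300.0) -> float:
    """1.0 = solved instantly for free; 0.0 = unsolved or budget exhausted."""
    if not solved:
        return 0.0
    token_frac = min(tokens_used / token_budget, 1.0)
    time_frac = min(seconds / time_budget, 1.0)
    return 1.0 - 0.5 * (token_frac + time_frac)

# Two models that both "pass" can land far apart:
print(efficiency_adjusted_score(True, tokens_used=800, seconds=20))     # ~0.93
print(efficiency_adjusted_score(True, tokens_used=9_500, seconds=280))  # ~0.06
```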