Despite strong benchmark scores, top Chinese AI models (from ZAI, Kimi, and DeepSeek) are "nowhere close" to US models like Claude or Gemini on complex, real-world vision tasks, such as accurately reading a messy scanned document. This suggests current benchmarks fail to reveal a significant real-world performance gap.
The performance gap between US and Chinese AI models may be widening due to second-order effects of chip controls. By constraining the compute available to serve models at scale, the controls reduce the volume of customer interactions and feedback Chinese firms receive, starving them of the data needed to identify and patch model weaknesses on diverse, real-world tasks.
There's a significant gap between AI performance in simulated benchmarks and in the real world. Despite scoring highly on evaluations, AIs in real deployments make "silly mistakes that no human would ever dream of doing," suggesting that current benchmarks don't capture the messiness and unpredictability of reality.
Public leaderboards like LM Arena are becoming unreliable proxies for model performance. Teams implicitly or explicitly game them by optimizing for the specific test sets. The superior strategy is to treat internal, proprietary evaluation metrics as the primary development target and use public benchmarks only as a final, confirmatory check.
The most significant gap in AI research is its focus on academic evaluations instead of tasks customers value, like medical diagnosis or legal drafting. The solution is to have real-world experts define benchmarks that measure performance on economically relevant work.
Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.
Just as standardized tests fail to capture a student's full potential, AI benchmarks often don't reflect real-world performance. The true value comes from the 'last mile' ingenuity of productization and workflow integration, not just raw model scores, which can be misleading.
Seemingly simple benchmarks yield wildly different results if not run under identical conditions. Third-party evaluators must run tests themselves because labs often use optimized prompts to inflate scores. Even then, challenges like parsing inconsistent answer formats make truly fair comparison a significant technical hurdle.
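To make the answer-parsing problem concrete, here is a minimal sketch of the kind of extraction logic a third-party harness ends up writing for multiple-choice benchmarks. It is an illustrative assumption rather than any particular evaluator's code: the `extract_choice` helper and its regex patterns are hypothetical, and real harnesses handle far more edge cases.

```python
import re

# Models return the "same" answer in many shapes: "B", "(B)", "Answer: B",
# "The correct answer is B) because ...", or a restatement of the option text.
# A grader that only accepts a bare letter silently mis-scores some models.
CHOICE_PATTERNS = [
    re.compile(r"answer\s*(?:is|:)?\s*\(?([A-D])\b", re.IGNORECASE),  # "The answer is (B)"
    re.compile(r"^\(?([A-D])[\).:\s]"),                               # "B) ..." at the start
    re.compile(r"\b([A-D])\s*$"),                                     # trailing bare letter
]

def extract_choice(model_output: str) -> str | None:
    """Best-effort extraction of a multiple-choice letter from free-form text."""
    text = model_output.strip()
    for pattern in CHOICE_PATTERNS:
        match = pattern.search(text)
        if match:
            return match.group(1).upper()
    return None  # unparseable -> typically scored as wrong or flagged for review

def accuracy(outputs: list[str], gold: list[str]) -> float:
    """Fraction correct; unparseable outputs count as incorrect."""
    return sum(extract_choice(o) == g for o, g in zip(outputs, gold)) / len(gold)

# Two outputs that mean the same answer, only one of which naive exact match accepts:
assert extract_choice("B") == "B"
assert extract_choice("The correct answer is (B).") == "B"
```

Multiply this by free-form answers, chain-of-thought preambles, and non-English outputs, and the "simple" act of grading becomes a meaningful source of score variance between evaluators.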
Don't trust academic benchmarks. Labs often "hill climb" or game them for marketing purposes, which doesn't translate to real-world capability. Furthermore, many of these benchmarks contain incorrect answers and messy data, making them an unreliable measure of true AI advancement.
While AI labs tout performance on standardized tests like math olympiads, these metrics often don't correlate with real-world usefulness or qualitative user experience. Users may prefer a model like Anthropic's Claude for its conversational style, a factor not measured by benchmarks.
AI labs often report results under different, carefully optimized prompting strategies, making direct comparisons impossible. For example, Google reported Gemini 1.0 Ultra's headline MMLU score using a non-standard chain-of-thought protocol that samples 32 responses (CoT@32) rather than the conventional 5-shot setup, boosting the number it could advertise. This highlights the need for neutral third-party evaluation.
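As a rough illustration of why the protocol matters, the toy sketch below contrasts a single greedy answer with a majority vote over 32 sampled chain-of-thought answers (in the spirit of CoT@32 / self-consistency). The `sample_answer` stub is hypothetical, and the actual Gemini procedure (uncertainty-routed chain-of-thought) is more involved; the point is only that the same model can post very different scores under different evaluation protocols.

```python
import random
from collections import Counter

def sample_answer(temperature: float, p_correct: float = 0.6) -> str:
    """Hypothetical stand-in for one model call on a question whose gold answer is 'A'.
    At temperature 0 it returns a fixed greedy decode (wrong here); at higher
    temperature it samples and is right with probability p_correct."""
    if temperature == 0.0:
        return "B"
    return "A" if random.random() < p_correct else "B"

def greedy_protocol() -> str:
    """Roughly what a standard few-shot eval reports: one deterministic answer."""
    return sample_answer(0.0)

def majority_vote_protocol(k: int = 32) -> str:
    """Majority vote over k sampled chain-of-thought answers (self-consistency style).
    Extra samples let the model outvote its own occasional slips."""
    votes = Counter(sample_answer(0.7) for _ in range(k))
    return votes.most_common(1)[0][0]

print(greedy_protocol())         # -> "B" (scored wrong)
print(majority_vote_protocol())  # -> usually "A" (scored right), same underlying model
```

Neither protocol is inherently wrong, but comparing one lab's 32-sample number against another lab's single-answer number says little about the models themselves.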