Artificial Analysis: The Independent LLM Analysis House — with George Cameron and Micah-Hill Smith

Latent Space: The AI Engineer Podcast · Jan 8, 2026

Artificial Analysis founders discuss their independent LLM benchmarks, business model, and new evals for hallucination, agents, and openness.

Artificial Analysis Monetizes via Enterprise Subscriptions, Not by Charging for Public Rankings

To maintain independence and trust, their public benchmarks are free and cannot be influenced by payments. The company generates revenue by selling detailed reports and insight subscriptions to enterprises, and by conducting private, custom benchmarking for AI companies, separating their public good from their commercial offerings.

The Omniscience Index Penalizes LLM Hallucination by Rewarding "I Don't Know" Answers

Traditional benchmarks incentivize guessing by only rewarding correct answers. The Omniscience Index directly combats hallucination by subtracting points for incorrect factual answers. This creates a powerful incentive for model developers to train their systems to admit when they lack knowledge, improving reliability.
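
As a rough illustration of how such a metric changes incentives, here is a minimal Python sketch of a hallucination-penalizing score: correct answers add a point, incorrect answers subtract one, and abstentions score zero. The exact weighting and the -100 to 100 scaling are assumptions for illustration, not Artificial Analysis's published formula.

```python
def omniscience_style_score(answers: list[str]) -> float:
    """Score graded answers labeled 'correct', 'incorrect', or 'abstain'.

    Correct answers add a point, hallucinated (incorrect) answers subtract one,
    and "I don't know" abstentions score zero, so blind guessing stops being
    the dominant strategy. The -100..100 scaling is an illustrative assumption.
    """
    score = sum(1 if a == "correct" else -1 if a == "incorrect" else 0
                for a in answers)
    return 100 * score / len(answers)

# A model that answers 60% correctly, abstains on 30%, and hallucinates on 10%
# outscores one that guesses its way to 65% correct with 35% wrong.
print(omniscience_style_score(["correct"] * 60 + ["abstain"] * 30 + ["incorrect"] * 10))  # 50.0
print(omniscience_style_score(["correct"] * 65 + ["incorrect"] * 35))                     # 30.0
```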

The Binary "Reasoning vs. Non-Reasoning" Model Distinction Is Now Obsolete

Classifying a model as "reasoning" simply because it emits a chain of thought is no longer useful. With massive differences in token efficiency, a so-called "reasoning" model can be faster and cheaper than a "non-reasoning" one for a given task. The focus is shifting to a continuous spectrum of capability versus the overall cost of completing the task.
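
A back-of-the-envelope comparison makes the point: what matters is the total cost to answer a given prompt, counting any thinking tokens as output. All prices and token counts below are illustrative assumptions, not measured figures.

```python
def cost_per_query(input_tokens: int, output_tokens: int,
                   in_price_per_m: float, out_price_per_m: float) -> float:
    """Total $ for one request, counting reasoning/thinking tokens as output."""
    return input_tokens / 1e6 * in_price_per_m + output_tokens / 1e6 * out_price_per_m

# A token-efficient "reasoning" model with low per-token prices can come in
# cheaper than a verbose "non-reasoning" model on the same prompt.
reasoning = cost_per_query(2_000, 1_500, in_price_per_m=0.30, out_price_per_m=1.20)
non_reasoning = cost_per_query(2_000, 900, in_price_per_m=1.00, out_price_per_m=4.00)
print(f"reasoning: ${reasoning:.4f}  non-reasoning: ${non_reasoning:.4f}")
# reasoning: $0.0024  non-reasoning: $0.0056
```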

A "Mystery Shopper" Policy Prevents LLM Providers from Gaming Benchmarks

To ensure they're testing publicly available models, Artificial Analysis creates anonymous accounts to run benchmarks without the provider's knowledge. Labs agree to this policy because it guarantees fairness and prevents their competitors from receiving special treatment or manipulating results, creating a stable, trusted equilibrium.

Popular LLM Benchmarks Inadvertently Cause AI Labs to "Teach to the Test"

Once an evaluation becomes an industry standard, AI labs focus research on improving scores for that specific task. This can lead to models excelling at narrow capabilities, like competition math, without a corresponding increase in general intelligence or real-world usefulness, a classic example of Goodhart's Law.

"Openness Index" Replaces "Open Source" by Scoring Models on Data and Method Transparency

To clarify the ambiguous "open source" label, the Openness Index scores models across multiple dimensions. It evaluates not only whether the weights are available, but also the degree to which training data, methodology, and code are disclosed. This creates a more useful spectrum of openness, distinguishing "open weights" from true "open science."
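
The idea can be sketched as a simple rubric. The dimensions below follow the ones named here (weights, training data, methodology, code); the equal weighting and 0-1 scoring are assumptions for illustration, not the published index.

```python
# Illustrative rubric: each dimension is scored 0.0-1.0 per model.
OPENNESS_DIMENSIONS = {
    "weights_released": 1.0,         # checkpoints downloadable under a usable license
    "training_data_disclosed": 1.0,  # datasets named or released
    "methodology_published": 1.0,    # training recipe / technical report
    "code_released": 1.0,            # training and inference code
}

def openness_index(model_disclosures: dict[str, float]) -> float:
    """Average per-dimension scores into one number, so 'open weights only'
    lands well below full 'open science'."""
    total = sum(model_disclosures.get(dim, 0.0) * weight
                for dim, weight in OPENNESS_DIMENSIONS.items())
    return total / sum(OPENNESS_DIMENSIONS.values())

open_weights_only = {"weights_released": 1.0}
open_science = {dim: 1.0 for dim in OPENNESS_DIMENSIONS}
print(openness_index(open_weights_only), openness_index(open_science))  # 0.25 1.0
```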

LLM Factual Knowledge Correlates Strongly with Total Parameter Count, Not Active Parameters

Artificial Analysis found that a model's ability to recall facts is a strong function of its total size, even for sparse Mixture-of-Experts (MoE) models. This suggests that the many parameters left "inactive" on any given token in an MoE architecture still contribute significantly to the model's stored knowledge, not just the parameters activated per token.

Artificial Analysis Began as a Side Project to Solve the Founders' Own LLM Benchmarking Needs

The founders built the tool because they needed independent, comparative data on LLM performance vs. cost for their own legal AI startup. It only became a full-time company after its utility grew with the explosion of new models, demonstrating how solving a personal niche problem can address a wider market need.

Inconsistent Prompting and Response Parsing Invalidate Most Self-Reported LLM Benchmarks

Seemingly simple benchmarks yield wildly different results if not run under identical conditions. Third-party evaluators must run tests themselves because labs often use optimized prompts to inflate scores. Even then, challenges like parsing inconsistent answer formats make truly fair comparison a significant technical hurdle.
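
A small example shows why parsing alone can swing scores: the same correct choice can appear in many surface forms, and an exact-match grader misses most of them. This extractor is a simplified sketch, not the harness Artificial Analysis actually uses.

```python
import re

# Multiple tolerant patterns for a multiple-choice answer (A-D).
ANSWER_PATTERNS = [
    re.compile(r"answer\s*(?:is|:)\s*\(?([A-D])\)?", re.IGNORECASE),  # "The answer is (B)"
    re.compile(r"^\(?([A-D])\)?[.):]?\s*$", re.MULTILINE),            # a bare "B." on its own line
    re.compile(r"\\boxed\{([A-D])\}"),                                # LaTeX-style \boxed{B}
]

def extract_choice(completion: str) -> str | None:
    """Return the first recognizable A-D choice in a model completion."""
    for pattern in ANSWER_PATTERNS:
        match = pattern.search(completion)
        if match:
            return match.group(1).upper()
    return None

for text in ["The answer is (B).", "B.", "Final answer: \\boxed{B}",
             "I think it's probably the second option."]:
    print(repr(text), "->", extract_choice(text))
```

A free-text answer like the last one still slips through, which is why harnesses end up choosing between stricter answer-format prompts and LLM judges, and why those choices move reported scores.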

Minimalist Agent Frameworks Can Unlock Higher Performance Than Native Web Chatbots

When testing models on the GDPval benchmark, Artificial Analysis's simple agent harness allowed models like Claude to outperform their official web chatbot counterparts. This implies that bespoke chatbot environments are often constrained for cost or safety reasons, limiting a model's full agentic capabilities, which developers can unlock with custom tooling.
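
For intuition, a bare-bones harness can be little more than a loop that lets the model call tools until it declares an answer. The `call_model` placeholder and the toy tool set below are hypothetical stand-ins, not the GDPval harness described in the episode.

```python
import json
import os

def call_model(messages: list[dict]) -> dict:
    """Hypothetical LLM call: returns either {"tool": name, "args": {...}}
    or {"final": answer_text}. Wire this to your provider's chat API."""
    raise NotImplementedError

# Two toy tools; real harnesses expose files, search, code execution, etc.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "list_dir": lambda path=".": "\n".join(sorted(os.listdir(path))),
}

def run_agent(task: str, max_turns: int = 20) -> str:
    """Minimal agent loop: model acts, harness executes, result is fed back."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(messages)
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f'{reply["tool"]} returned:\n{result}'})
    return "stopped: hit max_turns without a final answer"
```

The point is not the loop itself but that even this much freedom (tools, many turns, no product-level guardrails) can surface capability that a consumer chat UI leaves on the table.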

For AI Agents, "Number of Turns" Is Becoming a More Important Metric Than Token Cost

In complex, multi-step tasks, overall cost is determined by tokens per turn and the total number of turns. A more intelligent, expensive model can be cheaper overall if it solves a problem in two turns, while a cheaper model might take ten turns, accumulating higher total costs. Future benchmarks must measure this turn efficiency.
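
The arithmetic is simple enough to show directly. The prices and token counts below are illustrative assumptions; the point is that turn count multiplies everything else.

```python
def task_cost(turns: int, tokens_per_turn: int, price_per_m_tokens: float) -> float:
    """Total $ to finish one task: turns x tokens/turn x $/million tokens."""
    return turns * tokens_per_turn * price_per_m_tokens / 1e6

# A pricier model that finishes in 2 turns beats a cheaper one that needs 10.
frontier = task_cost(turns=2, tokens_per_turn=8_000, price_per_m_tokens=10.0)
budget = task_cost(turns=10, tokens_per_turn=8_000, price_per_m_tokens=2.5)
print(f"frontier: ${frontier:.2f}  budget: ${budget:.2f}")  # frontier: $0.16  budget: $0.20
```

In practice the gap widens further, because each additional turn re-sends the accumulated context as input tokens.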

AI Costs Follow a "Smiling Curve": Unit Intelligence Is Cheaper, but Total Spend Soars

A paradox exists where the cost for a fixed level of AI capability (e.g., GPT-4 level) has dropped 100-1000x. However, overall enterprise spend is increasing because applications now use frontier models with massive contexts and multi-step agentic workflows, creating huge multipliers on token usage that drive up total costs.
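
A worked example of the multiplier effect, using purely illustrative numbers: per-token prices fall, but context length and agentic turn counts grow faster, so fleet-level spend still rises.

```python
# Purely illustrative numbers (assumptions, not measured data). GPT-4-class
# capability itself is now far cheaper per token, but applications moved to
# frontier models, long contexts, and agentic loops instead of pocketing the savings.
workload_2023 = {"tokens_per_task": 3_000, "price_per_m": 36.0}            # one chat turn, GPT-4-era pricing
workload_2025 = {"tokens_per_task": 3_000 * 15 * 20, "price_per_m": 12.0}  # ~15x context x ~20 agent turns, frontier pricing

def annual_spend(workload: dict, tasks: int = 1_000_000) -> float:
    """Fleet-level spend: tasks x tokens/task x $/million tokens."""
    return tasks * workload["tokens_per_task"] / 1e6 * workload["price_per_m"]

print(f"2023-style: ${annual_spend(workload_2023):,.0f}")  # $108,000
print(f"2025-style: ${annual_spend(workload_2025):,.0f}")  # $10,800,000
```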
