Teams often mistakenly frame offline evals and online production monitoring as an either/or decision. This is a false choice. Evals are crucial for testing against known failure modes before deployment; production monitoring is essential for discovering new, unexpected failure patterns in real user interactions. Both are required for a robust feedback loop.
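
A minimal sketch of how the two sides can feed one loop, using hypothetical helpers (run_offline_evals gates a release on known failure modes; flag_unexplained_failures surfaces production interactions that match none of them, so they can become new test cases):

```python
def run_offline_evals(agent, test_cases):
    """Gate a release: pass rate against known failure modes captured as test cases.
    Each case is a dict like {"input": ..., "check": callable -> bool} (assumed shape)."""
    results = [case["check"](agent(case["input"])) for case in test_cases]
    return sum(results) / len(results)

def flag_unexplained_failures(production_logs, known_failure_tags):
    """Online side of the loop: return negatively rated interactions that match
    no known failure tag, as candidates for review and new offline test cases."""
    return [
        log for log in production_logs
        if log.get("user_feedback") == "negative"
        and not any(tag in log.get("tags", []) for tag in known_failure_tags)
    ]
```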

Related Insights

Don't treat evals as a mere checklist. Instead, use them as a creative tool to discover opportunities. A well-designed eval can reveal that a product is underperforming for a specific user segment, pointing directly to areas for high-impact improvement that a simple "vibe check" would miss.
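
For example, breaking eval results out by user segment can turn a single overall pass rate into a map of where the product underperforms. A minimal sketch, with the result shape assumed for illustration:

```python
from collections import defaultdict

def pass_rate_by_segment(eval_results):
    """eval_results: iterable of dicts like {"segment": "enterprise", "passed": True}
    (assumed shape). Returns per-segment pass rates to expose underserved segments."""
    totals, passes = defaultdict(int), defaultdict(int)
    for r in eval_results:
        totals[r["segment"]] += 1
        passes[r["segment"]] += int(r["passed"])
    return {segment: passes[segment] / totals[segment] for segment in totals}
```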

The word "evals" has been stretched to mean many different things: expert-written error analysis, PM-defined test cases, performance benchmarks, and LLM-based judges. This "semantic diffusion" causes confusion. Teams need to be specific about what part of the feedback loop they're discussing instead of using the generic term.

AI product quality depends heavily on infrastructure reliability, and AI inference infrastructure is less stable than traditional cloud services. Jared Palmer's team at Vercel monitored key metrics like "error-free sessions" in near real-time. This intense, data-driven approach is crucial for building a reliable agentic product, because inference providers frequently drop requests.
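
A sketch of the metric itself, assuming each session is a list of event dicts with an "error" flag (the session format is an assumption, not Vercel's actual schema):

```python
def error_free_session_rate(sessions):
    """Share of sessions in which no event carried an error, e.g. a dropped
    or failed inference request. Computed over a recent time window."""
    sessions = list(sessions)
    if not sessions:
        return 1.0
    clean = sum(1 for events in sessions if not any(e.get("error") for e in events))
    return clean / len(sessions)
```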

Treating AI evaluation like a final exam is a mistake. For critical enterprise systems, evaluations should be embedded at every step of an agent's workflow (e.g., after planning, before action). This is akin to unit testing in classic software development and is essential for building trustworthy, production-ready agents.
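
A minimal sketch of what step-level checks might look like, with hypothetical plan_fn, act_fn, and validator functions standing in for a real agent framework:

```python
def run_agent_with_checks(task, plan_fn, act_fn, checks):
    """Embed evaluation at each workflow step rather than only grading the
    final answer. `checks` maps step names ("after_planning", "before_action")
    to validators returning (ok, reason)."""
    plan = plan_fn(task)

    ok, reason = checks["after_planning"](plan)
    if not ok:
        raise ValueError(f"Plan rejected: {reason}")

    results = []
    for step in plan:
        ok, reason = checks["before_action"](step)
        if not ok:
            raise ValueError(f"Step blocked: {reason}")
        results.append(act_fn(step))
    return results
```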

Don't treat validation as a one-off task before development. The most successful products maintain a constant feedback loop with users to adapt to changing needs, regulations, and tastes. The worst mistake is to stop listening after the initial launch, as businesses that fail to adapt ultimately fail.

Building a functional AI agent is just the starting point. The real work lies in developing a set of evaluations ("evals") to test if the agent consistently behaves as expected. Without quantifying failures and successes against a standard, you're just guessing, not iteratively improving the agent's performance.
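
A minimal eval harness along these lines (the case shape and grading functions are assumptions) quantifies the pass rate and keeps failures for inspection, so each iteration on the agent is compared against the same standard:

```python
def run_evals(agent, cases):
    """cases: list of dicts like {"input": ..., "grade": callable -> bool}
    (assumed shape). Returns the pass rate plus the failing cases."""
    if not cases:
        return 1.0, []
    failures = []
    for case in cases:
        output = agent(case["input"])
        if not case["grade"](output):
            failures.append({"input": case["input"], "output": output})
    pass_rate = 1 - len(failures) / len(cases)
    return pass_rate, failures
```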

The common mistake in building AI evals is jumping straight to writing automated tests. The correct first step is a manual process called "error analysis" or "open coding," where a product expert reviews real user interaction logs to understand what's actually going wrong. This grounds your entire evaluation process in reality.
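
One way to set this up, sketched below with an assumed log format: sample real interactions into a spreadsheet and leave a free-text column for the expert's notes (the "codes").

```python
import csv
import random

def sample_logs_for_open_coding(logs, n=50, out_path="open_coding_sample.csv"):
    """Draw a random sample of interaction logs and write them to a CSV where
    a product expert records free-form notes about what actually went wrong."""
    sample = random.sample(logs, min(n, len(logs)))
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["user_input", "model_output", "notes"])
        writer.writeheader()
        for log in sample:
            writer.writerow({
                "user_input": log["user_input"],
                "model_output": log["model_output"],
                "notes": "",  # filled in by hand during open coding
            })
```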

Developers often test AI systems with well-formed, correctly spelled questions. However, real users submit vague, typo-ridden, and ambiguous prompts. Directly analyzing these raw logs is the most crucial first step to understanding how your product fails in the real world and where to focus quality improvements.

You don't need to create an automated "LLM as a judge" for every potential failure. Many issues discovered during error analysis can be fixed with a simple prompt adjustment. Reserve the effort of building robust, automated evals for the 4-7 most persistent and critical failure modes that prompt changes alone cannot solve.
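
When a failure mode does warrant automation, an LLM judge can be as simple as the sketch below; call_llm is a hypothetical function that sends a prompt to whatever model you use and returns its text reply.

```python
JUDGE_PROMPT = """You are checking an AI assistant's reply for one specific failure mode:
{failure_mode}

User message: {user_input}
Assistant reply: {model_output}

Answer with exactly one word: PASS or FAIL."""

def llm_judge(call_llm, failure_mode, user_input, model_output):
    """Automated check for a single persistent failure mode. Returns True on PASS."""
    prompt = JUDGE_PROMPT.format(
        failure_mode=failure_mode,
        user_input=user_input,
        model_output=model_output,
    )
    return call_llm(prompt).strip().upper().startswith("PASS")
```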

Despite mature backtesting frameworks, Intercom repeatedly sees promising offline results fail in production. The "messiness of real human interaction" is unpredictable, making at-scale A/B tests essential for validating AI performance improvements, even for changes as small as a tenth of a percentage point.
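
To see why scale matters, a back-of-the-envelope power calculation (normal approximation, with a 50% baseline assumed for illustration) shows how many sessions per arm a 0.1-percentage-point lift demands:

```python
from statistics import NormalDist

def samples_per_arm(p_baseline, delta, alpha=0.05, power=0.8):
    """Approximate sample size per arm to detect an absolute lift `delta`
    in a two-proportion A/B test, using the normal approximation."""
    p1, p2 = p_baseline, p_baseline + delta
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_power) ** 2 * variance) / (delta ** 2)

# samples_per_arm(0.50, 0.001) -> roughly 3.9 million sessions per arm
```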