The common mistake in building AI evals is jumping straight to writing automated tests. The correct first step is a manual process called "error analysis" or "open coding," where a product expert reviews real user interaction logs to understand what's actually going wrong. This grounds your entire evaluation process in reality.

Related Insights

AI interactions often involve multiple steps (e.g., user prompt, tool calls, retrieval), and a mistake at any one of them can derail everything that follows. The most efficient debugging heuristic is to walk the sequence in order and stop at the very first mistake. Focusing on this "most upstream problem" addresses the root cause; the downstream failures are merely symptoms. A minimal sketch of the heuristic follows.
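Here is a minimal sketch of that heuristic, assuming each trace is a list of step dictionaries; the step schema and the per-step checks are illustrative, not from any particular framework:

```python
from typing import Callable, Optional

# Walk a trace's steps in order and return the most upstream step that fails
# a check. The Step shape and check functions are illustrative assumptions.
Step = dict  # e.g. {"type": "tool_call", "name": "schedule_tour", "error": ...}

def first_upstream_failure(
    trace: list[Step],
    checks: dict[str, Callable[[Step], Optional[str]]],
) -> Optional[tuple[int, str]]:
    """Return (step_index, error_note) for the earliest failing step, else None."""
    for i, step in enumerate(trace):
        check = checks.get(step.get("type", ""))
        if check is None:
            continue  # no check registered for this step type
        error = check(step)
        if error:
            # Everything after this point is likely a symptom, not the cause.
            return i, error
    return None

# Example checks (hypothetical): retrieval must return documents,
# and a tool call must not report an error.
checks = {
    "retrieval": lambda s: "empty retrieval" if not s.get("documents") else None,
    "tool_call": lambda s: s.get("error"),
}

trace = [
    {"type": "retrieval", "documents": []},                                    # root cause
    {"type": "tool_call", "name": "schedule_tour", "error": "no listing id"},  # symptom
]
print(first_upstream_failure(trace, checks))  # -> (0, 'empty retrieval')
```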

Generic evaluation metrics like "helpfulness" or "conciseness" are vague and untrustworthy. A better approach is to first perform manual error analysis to find recurring problems (e.g., "tour scheduling failures"). Then, build specific, targeted evaluations (evals) that directly measure the frequency of these concrete issues, making metrics meaningful.
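Once traces carry labels from error analysis, the metric can be as plain as a frequency count per failure mode. A sketch, with hypothetical label names and data:

```python
from collections import Counter

# Instead of scoring "helpfulness", count how often each specific failure mode
# occurs in a sample of annotated traces. Labels and traces are made-up examples.
annotated_traces = [
    {"id": "t1", "failures": ["tour_scheduling_failure"]},
    {"id": "t2", "failures": []},
    {"id": "t3", "failures": ["tour_scheduling_failure", "wrong_listing_cited"]},
    {"id": "t4", "failures": ["wrong_listing_cited"]},
]

def failure_rates(traces: list[dict]) -> dict[str, float]:
    """Fraction of traces exhibiting each labeled failure mode."""
    counts = Counter(label for t in traces for label in set(t["failures"]))
    return {label: n / len(traces) for label, n in counts.most_common()}

print(failure_rates(annotated_traces))
# {'tour_scheduling_failure': 0.5, 'wrong_listing_cited': 0.5}
```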

When conducting manual "open coding" for AI evals, teams often get bogged down trying to reach consensus. Instead, appoint a single person with deep domain expertise (often the product manager) as the "benevolent dictator" who makes the final judgment calls on error categorization. This keeps the process tractable and fast.

Don't ask an LLM to perform initial error analysis; it lacks the product context to spot subtle failures. Instead, have a human expert write detailed, freeform notes ("open codes"). Then, leverage an LLM's strength in synthesis to automatically categorize those hundreds of human-written notes into actionable failure themes ("axial codes").
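A sketch of that synthesis step, assuming the OpenAI Python SDK (any LLM client works); the prompt wording and model name are illustrative placeholders:

```python
from openai import OpenAI  # assumes the OpenAI SDK; any LLM client works

# A human expert has already written freeform "open codes" (one note per
# problematic trace). Ask an LLM to cluster them into recurring failure
# themes ("axial codes").
open_codes = [
    "Agent offered a tour time the listing's calendar doesn't have.",
    "Reply ignored the user's budget constraint from two turns earlier.",
    "Scheduled a tour but never confirmed the address with the user.",
    # ... hundreds more human-written notes
]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Here are freeform notes from manual error analysis of an AI "
                "assistant, one per line. Group them into at most 8 recurring "
                "failure themes. For each theme, give a short name, a one-line "
                "description, and the indices of the notes it covers.\n\n"
                + "\n".join(f"{i}. {note}" for i, note in enumerate(open_codes))
            ),
        }
    ],
)
print(response.choices[0].message.content)
```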

Treating AI evaluation like a final exam is a mistake. For critical enterprise systems, evaluations should be embedded at every step of an agent's workflow (e.g., after planning, before action). This is akin to unit testing in classic software development and is essential for building trustworthy, production-ready agents.
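A sketch of what step-level checks can look like inside an agent loop; the planner, executor, and check rules below are trivial stand-ins, not a real framework:

```python
# "Unit tests inside the workflow": check the agent's intermediate artifacts
# (the plan, the tool arguments) before moving on, instead of only grading
# the final answer.

class StepCheckError(Exception):
    """An intermediate step failed its embedded check."""

def check_plan(plan: list[str]) -> None:
    if not plan:
        raise StepCheckError("planner produced an empty plan")
    if len(plan) > 10:
        raise StepCheckError("implausibly long plan; possible planning loop")

def check_tool_args(tool: str, args: dict) -> None:
    if tool == "schedule_tour" and "listing_id" not in args:
        raise StepCheckError("schedule_tour called without a listing_id")

# Trivial stand-ins so the example runs end to end.
def planner(request: str) -> list[str]:
    return ["look_up_listing", "schedule_tour"]

def execute(step: str) -> tuple[str, dict]:
    return (step, {"listing_id": "abc123"} if step == "schedule_tour" else {})

def run_agent(request: str) -> None:
    plan = planner(request)
    check_plan(plan)                      # eval after planning
    for step in plan:
        tool, args = execute(step)
        check_tool_args(tool, args)       # eval before acting on the result
        print(f"ran {tool} with {args}")

run_agent("Book me a tour of the 2BR on Elm St this weekend")
```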

Developers often test AI systems with well-formed, correctly spelled questions. However, real users submit vague, typo-ridden, and ambiguous prompts. Directly analyzing these raw logs is the most crucial first step to understanding how your product fails in the real world and where to focus quality improvements.

Teams that claim to build AI on "vibes," like the Claude Code team, aren't ignoring evaluation. Their intense, expert-led dogfooding is a form of manual error analysis. Furthermore, their products are built on foundational models that have already undergone rigorous automated evaluations. The two approaches are part of the same quality spectrum, not opposites.

You don't need to create an automated "LLM as a judge" for every potential failure. Many issues discovered during error analysis can be fixed with a simple prompt adjustment. Reserve the effort of building robust, automated evals for the 4-7 most persistent and critical failure modes that prompt changes alone cannot solve.
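When a failure mode does earn an automated eval, the judge can stay narrow: a binary pass/fail check for that one issue rather than a generic quality score. A sketch, again assuming the OpenAI SDK, with an illustrative prompt and placeholder model:

```python
from openai import OpenAI  # assumes the OpenAI SDK; any LLM client works

# A narrow LLM-as-judge: a binary check for one persistent failure mode
# surfaced by error analysis, not a generic "quality" score.
JUDGE_PROMPT = """You are checking a real-estate assistant's reply for ONE specific failure:
did the assistant propose or confirm a tour time without verifying availability?

Conversation:
{conversation}

Answer with exactly one word: PASS or FAIL."""

client = OpenAI()

def judge_tour_scheduling(conversation: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(conversation=conversation)}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("PASS")
```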

Instead of seeking a "magical system" for AI quality, the most effective starting point is a manual process called error analysis. This involves spending a few hours reading through ~100 random user interactions, taking simple notes on failures, and then categorizing those notes to identify the most common problems.
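A sketch of that starting point, assuming logs are stored as one JSON object per line with "user" and "assistant" fields; adapt it to whatever your logging actually emits:

```python
import json
import random

# Pull ~100 random traces from the logs and step through them, writing a
# freeform note for each. File names and field names are assumptions.

def sample_traces(log_path: str, n: int = 100, seed: int = 0) -> list[dict]:
    with open(log_path) as f:
        traces = [json.loads(line) for line in f]
    random.seed(seed)
    return random.sample(traces, min(n, len(traces)))

def annotate(traces: list[dict], out_path: str = "open_codes.jsonl") -> None:
    with open(out_path, "w") as out:
        for i, t in enumerate(traces):
            print(f"\n--- trace {i + 1}/{len(traces)} ---")
            print("USER:", t["user"])
            print("ASSISTANT:", t["assistant"])
            note = input("What went wrong, if anything? > ")
            out.write(json.dumps({"trace": t, "note": note}) + "\n")

if __name__ == "__main__":
    annotate(sample_traces("interactions.jsonl"))
```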

Reviewing user interaction data is the highest ROI activity for improving an AI product. Instead of relying solely on third-party observability tools, high-performing teams build simple, custom internal applications. These tools are tailored to their specific data and workflow, removing all friction from the process of looking at and annotating traces.
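A sketch of such an internal tool, here using Streamlit purely as an example stack; the file names and trace fields are assumptions:

```python
import json
import streamlit as st  # assumes Streamlit; any lightweight UI framework works

# A minimal trace-annotation app: show one trace at a time, capture a freeform
# note, append it to a file. Tailor the rendering to your own data.

@st.cache_data
def load_traces(path: str = "interactions.jsonl") -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]

traces = load_traces()
if "idx" not in st.session_state:
    st.session_state.idx = 0

trace = traces[st.session_state.idx]
st.write(f"Trace {st.session_state.idx + 1} of {len(traces)}")
st.json(trace)  # render the full trace

note = st.text_area("Open code (what went wrong, in your own words)")
if st.button("Save and next"):
    with open("open_codes.jsonl", "a") as out:
        out.write(json.dumps({"trace_id": trace.get("id"), "note": note}) + "\n")
    st.session_state.idx = (st.session_state.idx + 1) % len(traces)
    st.rerun()
```

Launching it is a single `streamlit run` command, and because it is tailored to your own trace format, reviewing and annotating a trace takes seconds rather than minutes.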