You don't need a sophisticated and expensive AI observability platform to start doing evals. The most critical first step is logging traces. This can be done simply by writing to a CSV, JSON, or text file. The key is to begin taking notes on your traces, not to implement the perfect tool.
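As a concrete starting point, here is a minimal sketch of what such trace logging can look like, assuming an append-only JSONL file (the filename and field names are illustrative, not a required schema):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

TRACE_FILE = Path("traces.jsonl")  # hypothetical path; any writable location works

def log_trace(user_input: str, model_output: str, metadata: dict | None = None) -> None:
    """Append one interaction as a JSON line; no observability platform required."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_input": user_input,
        "model_output": model_output,
        "metadata": metadata or {},
        "notes": "",  # leave room for the notes you take later
    }
    with TRACE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage:
# log_trace("How do I reset my password?", "You can reset it under Settings > Security.")
```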

Related Insights

Systematically review production traces ("open coding"), categorize the observed errors ("axial coding"), and then count them. This simple process transforms subjective "vibe checks" and messy logs into a prioritized, data-backed roadmap for improving your AI application, giving PMs a superpower.
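Once the notes exist, the counting step is trivial. A minimal sketch, assuming the open-coding notes live in a hypothetical annotations.csv with a free-form category column filled in during axial coding:

```python
import csv
from collections import Counter

# Assumes a hypothetical annotations.csv with one row per reviewed trace
# and a "category" column assigned during axial coding.
with open("annotations.csv", newline="", encoding="utf-8") as f:
    categories = [row["category"] for row in csv.DictReader(f) if row.get("category")]

for category, count in Counter(categories).most_common():
    print(f"{count:4d}  {category}")  # the most frequent failure modes rise to the top
```

The categories at the top of that list become the prioritized roadmap.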

If your application isn't live and you lack real user data, you can still perform evals. The best methods are dogfooding the product yourself and recruiting friends to try it. If that's not possible, use an LLM to simulate user interactions at scale. This generates the necessary traces to begin the crucial error analysis process before launch.
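A minimal sketch of LLM-driven user simulation, assuming the OpenAI Python SDK (any LLM client would work) and a couple of invented personas; the model name is a placeholder for whatever you have access to:

```python
from openai import OpenAI  # assumption: any LLM client would work here

client = OpenAI()

# Hypothetical personas; tailor these to your actual user base.
PERSONAS = [
    "an impatient new user who gives terse, ambiguous requests",
    "a power user who asks multi-step questions with edge cases",
]

def simulate_user_message(persona: str, topic: str) -> str:
    """Ask the LLM to play a user and produce one realistic input message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your own model
        messages=[
            {"role": "system", "content": f"You are {persona}. Write a single message a real user might send."},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return response.choices[0].message.content

# Feed each simulated message into your application and log the resulting trace
# exactly as you would for a real user, so error analysis can begin pre-launch.
```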

The primary bottleneck in improving AI is no longer data or compute, but the creation of 'evals'—tests that measure a model's capabilities. These evals act as product requirements documents (PRDs) for researchers, defining what success looks like and guiding the training process.

Building a functional AI agent is just the starting point. The real work lies in developing a set of evaluations ("evals") to test if the agent consistently behaves as expected. Without quantifying failures and successes against a standard, you're just guessing, not iteratively improving the agent's performance.
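A minimal sketch of what quantifying against a standard can look like, assuming the agent is callable as a function and each test case carries its own pass/fail check (all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    user_input: str
    check: Callable[[str], bool]  # returns True if the agent's output is acceptable

def run_evals(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run every case, report failures, and return the overall pass rate."""
    passed = 0
    for case in cases:
        output = agent(case.user_input)
        if case.check(output):
            passed += 1
        else:
            print(f"FAIL: {case.name}")
    return passed / len(cases)

# Tracking this single pass rate across iterations turns guessing into measurement.
```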

The common mistake in building AI evals is jumping straight to writing automated tests. The correct first step is a manual process called "error analysis" or "open coding," where a product expert reviews real user interaction logs to understand what's actually going wrong. This grounds your entire evaluation process in reality.

You don't need to create an automated "LLM as a judge" for every potential failure. Many issues discovered during error analysis can be fixed with a simple prompt adjustment. Reserve the effort of building robust, automated evals for the 4-7 most persistent and critical failure modes that prompt changes alone cannot solve.
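For the failure modes that do warrant automation, an LLM-as-a-judge eval can be as small as the sketch below. It assumes the OpenAI Python SDK and uses an invented failure mode (hallucinated order details) purely for illustration:

```python
from openai import OpenAI  # assumption: any capable LLM client works as the judge

client = OpenAI()

JUDGE_PROMPT = """You are grading an AI assistant's answer for one specific failure mode:
did the assistant invent order details that were not present in the provided context?
Answer with exactly one word: PASS or FAIL."""

def judge_hallucinated_details(context: str, answer: str) -> bool:
    """Return True if the judge believes the answer avoided this failure mode."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whichever model you trust as a judge
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nAnswer:\n{answer}"},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("PASS")
```

Keeping each judge scoped to one narrowly defined failure mode makes its verdicts far easier to spot-check against your own labels.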

The trend of buying expensive, simulated Reinforcement Learning (RL) environments is misguided. The most effective and valuable training ground is the live application itself. Companies can achieve better results by using logs and traces from actual users, which provides the most accurate data for agent improvement.

Instead of seeking a "magical system" for AI quality, the most effective starting point is a manual process called error analysis. This involves spending a few hours reading through ~100 random user interactions, taking simple notes on failures, and then categorizing those notes to identify the most common problems.
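A minimal sketch of preparing that review, assuming traces are stored in the JSONL format from the logging sketch above; it samples up to 100 interactions into a review sheet with empty note and category columns:

```python
import csv
import json
import random

# Assumes the traces.jsonl format from the logging sketch above.
with open("traces.jsonl", encoding="utf-8") as f:
    traces = [json.loads(line) for line in f]

sample = random.sample(traces, k=min(100, len(traces)))

# Write a simple review sheet: read each row, jot a note, name the failure (if any).
with open("error_analysis.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["user_input", "model_output", "note", "category"])
    writer.writeheader()
    for trace in sample:
        writer.writerow({
            "user_input": trace["user_input"],
            "model_output": trace["model_output"],
            "note": "",
            "category": "",
        })
```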

Treat accountability as an engineering problem. Implement a system that logs every significant AI action, decision path, and triggering input. This creates an auditable, attributable record, ensuring that in the event of an incident, the 'why' can be traced without ambiguity, much like a flight recorder after a crash.
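A minimal "flight recorder" sketch using Python's standard logging module; the field names and the example action are assumptions, not a prescribed schema:

```python
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit.log"))

def record_action(action: str, triggering_input: str, decision_path: list[str], actor: str) -> None:
    """Write one attributable, append-only record per significant AI action."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                     # which model or agent version acted
        "action": action,                   # what it did
        "triggering_input": triggering_input,
        "decision_path": decision_path,     # the intermediate steps that led here
    }))

# Hypothetical example:
# record_action("issued_refund", "Customer asked for refund on order 1234",
#               ["classified_intent:refund", "policy_check:eligible"], actor="agent-v3")
```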

Reviewing user interaction data is the highest ROI activity for improving an AI product. Instead of relying solely on third-party observability tools, high-performing teams build simple, custom internal applications. These tools are tailored to their specific data and workflow, removing all friction from the process of looking at and annotating traces.
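Such an internal tool does not have to be elaborate. A minimal sketch of a terminal-based annotation loop, assuming the JSONL trace format used above (teams often graduate to a small web app, but even this removes most of the friction):

```python
import json

# Assumes the traces.jsonl format used above; annotations go to a second file.
with open("traces.jsonl", encoding="utf-8") as f:
    traces = [json.loads(line) for line in f]

with open("annotations.jsonl", "a", encoding="utf-8") as out:
    for i, trace in enumerate(traces, start=1):
        print(f"\n--- Trace {i}/{len(traces)} ---")
        print("USER: ", trace["user_input"])
        print("MODEL:", trace["model_output"])
        note = input("Note (blank to skip, 'q' to quit): ").strip()
        if note.lower() == "q":
            break
        if note:
            out.write(json.dumps({"trace_index": i, "note": note}) + "\n")
```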