Before building an AI agent, product managers must first create an evaluation set and scorecard. This 'eval-driven development' approach is critical for measuring whether training is improving the model and aligning its progress with the product vision. Without it, you cannot objectively demonstrate progress.
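As a rough illustration, here is a minimal sketch of what an eval set and scorecard can look like in code. The `EvalCase` structure, the keyword-based pass criteria, and the billing-support examples are hypothetical placeholders for whatever your product actually needs to check.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One example in the eval set: an input plus the criteria a good answer must meet."""
    prompt: str
    must_include: list[str]  # simple keyword criteria, just for this sketch

def score(cases: list[EvalCase], agent: Callable[[str], str]) -> float:
    """Return the fraction of cases the agent passes; this is the scorecard number to track over time."""
    passed = 0
    for case in cases:
        answer = agent(case.prompt)
        if all(term.lower() in answer.lower() for term in case.must_include):
            passed += 1
    return passed / len(cases)

# A tiny, illustrative eval set for a hypothetical billing-support agent.
eval_set = [
    EvalCase("How do I get a refund?", must_include=["refund", "within 30 days"]),
    EvalCase("Can I change my plan mid-cycle?", must_include=["prorated"]),
]
```

Tracking this single number across training runs is what lets you say "the model is getting better at our product" rather than relying on impressions.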
Don't treat evals as a mere checklist. Instead, use them as a creative tool to discover opportunities. A well-designed eval can reveal that a product is underperforming for a specific user segment, pointing directly to areas for high-impact improvement that a simple "vibe check" would miss.
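One way to make that discovery concrete is to tag every eval case with a user segment and report pass rates per segment instead of a single aggregate. The sketch below assumes plain dict-based cases with `segment`, `prompt`, and `must_include` fields; the field names and keyword check are illustrative, not a prescribed schema.

```python
from collections import defaultdict

def score_by_segment(cases: list[dict], agent) -> dict[str, float]:
    """Pass rate per user segment, so a weak segment stands out instead of averaging away."""
    totals: dict[str, int] = defaultdict(int)
    passes: dict[str, int] = defaultdict(int)
    for case in cases:
        totals[case["segment"]] += 1
        answer = agent(case["prompt"])
        if all(term.lower() in answer.lower() for term in case["must_include"]):
            passes[case["segment"]] += 1
    return {seg: passes[seg] / totals[seg] for seg in totals}

# A result like {"enterprise admin": 0.93, "first-time user": 0.55} points straight
# at a high-impact gap that a single aggregate score would hide.
```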
The main obstacle to deploying enterprise AI isn't just technical; it's achieving organizational alignment on a quantifiable definition of success. Creating a comprehensive evaluation suite before building is crucial because no single person typically knows all the right answers.
Treating AI evaluation like a final exam is a mistake. For critical enterprise systems, evaluations should be embedded at every step of an agent's workflow (e.g., after planning, before action). This is akin to unit testing in classic software development and is essential for building trustworthy, production-ready agents.
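Here is a hedged sketch of what step-level evaluation can look like for a plan-then-act agent. `plan_fn`, `act_fn`, `summarize_fn`, the allowed-tool check, and the dict shapes for steps and results are all assumptions standing in for your agent's real interfaces.

```python
def run_agent_with_checks(task: str, plan_fn, act_fn, summarize_fn, allowed_tools: set[str]):
    """Run a plan-then-act agent, evaluating intermediate state at each step
    rather than only grading the final answer."""
    plan = plan_fn(task)

    # Eval 1: after planning, before any action is taken.
    assert plan, "agent produced an empty plan"
    assert all(step["tool"] in allowed_tools for step in plan), "plan uses a disallowed tool"

    results = []
    for step in plan:
        result = act_fn(step)
        # Eval 2: after each action, before the agent continues.
        assert result.get("status") == "ok", f"step failed: {step}"
        results.append(result)

    return summarize_fn(task, results)
```

Each assertion plays the role of a unit test on an intermediate artifact, so a failure is caught at the step that caused it rather than surfacing as a bad final answer.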
In AI, low prototyping costs and customers' uncertainty about what they actually want make the traditional research-first PM model obsolete. The new approach is to build a prototype quickly, show it to customers to discover what is possible, and then iterate based on their reactions, effectively building the solution before the problem is fully defined.
The primary bottleneck in improving AI is no longer data or compute, but the creation of 'evals'—tests that measure a model's capabilities. These evals act as product requirement documents (PRDs) for researchers, defining what success looks like and guiding the training process.
Building a functional AI agent is just the starting point. The real work lies in developing a set of evaluations ("evals") to test if the agent consistently behaves as expected. Without quantifying failures and successes against a standard, you're just guessing, not iteratively improving the agent's performance.
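One simple way to turn "consistently behaves as expected" into something enforceable is a regression gate that compares the current pass rate against the last shipped baseline. The tolerance and baseline values below are illustrative, not recommendations.

```python
def regression_gate(current_pass_rate: float, baseline_pass_rate: float,
                    tolerance: float = 0.02) -> None:
    """Fail the build if the agent's eval pass rate drops meaningfully below the shipped baseline."""
    if current_pass_rate < baseline_pass_rate - tolerance:
        raise SystemExit(
            f"Eval regression: {current_pass_rate:.1%} vs baseline {baseline_pass_rate:.1%}"
        )

# Typical use in CI, reusing a score() helper like the one sketched earlier:
# regression_gate(score(eval_set, current_agent), baseline_pass_rate=0.87)
```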
Because PMs deeply understand the customer's job, needs, and alternatives, they are the only ones qualified to write the evaluation criteria for what a successful AI output looks like. This critical task goes beyond technical metrics and is core to the PM's role in the AI era.
The prompts for your "LLM as a judge" evals function as a new form of PRD. They explicitly define the desired behavior, edge cases, and quality standards for your AI agent. Unlike static PRDs, these are living documents: derived from real user data, they constantly and automatically test whether the product meets its requirements.
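A minimal sketch of an LLM-as-judge eval follows. The numbered requirements are placeholder examples of the kind of behavior a PRD-style judge prompt encodes, and `call_model` stands in for whichever model client your team actually uses.

```python
JUDGE_PROMPT = """You are grading a support agent's reply.
Requirements (this is the living PRD):
1. The reply must answer the user's question directly.
2. It must not promise refunds outside the 30-day policy.
3. It must keep a polite, professional tone.
Reply with PASS or FAIL followed by a one-sentence reason.

User message: {user_message}
Agent reply: {agent_reply}
"""

def judge(user_message: str, agent_reply: str, call_model) -> bool:
    """Check a real interaction against the requirements encoded in the judge prompt.
    `call_model` is any function that sends a prompt to an LLM and returns its text."""
    verdict = call_model(JUDGE_PROMPT.format(user_message=user_message, agent_reply=agent_reply))
    return verdict.strip().upper().startswith("PASS")
```

Editing that prompt is how you change the product requirements, and every production interaction run through it is a requirements check.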
In traditional product management, data was for analysis. In AI, data *is* the product. PMs must now deeply understand data pipelines, data health, and the critical feedback loop in which model outputs are used to retrain and improve the product itself. This is a new core competency.
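A minimal sketch of closing that feedback loop, assuming interactions are appended to a JSONL log with a simple thumbs-up/thumbs-down signal; the file format, field names, and filtering rule are assumptions for illustration.

```python
import json
import time

def log_interaction(path: str, prompt: str, output: str, user_feedback: str) -> None:
    """Append each production interaction and its feedback signal to a JSONL file.
    Downstream jobs can turn this log into new eval cases and fine-tuning data."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "output": output,
        "feedback": user_feedback,  # e.g. "thumbs_up", "thumbs_down", "edited"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def build_training_set(path: str) -> list[dict]:
    """Keep only interactions users accepted; one simple way to close the data flywheel."""
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r for r in records if r["feedback"] == "thumbs_up"]
```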
Evals are useful for catching regressions, much like unit tests, but directly optimizing for an eval benchmark is misleading. Evals are, by definition, a lagging proxy for the real-world user experience, and over-optimizing for a metric can lead to gaming it and degrading the actual product.