Quantifying the "goodness" of an AI-generated summary is analogous to measuring the impact of a peacebuilding initiative. Both require moving beyond simple quantitative data (clicks, meetings held) to define and measure complex, intangible outcomes by focusing on the qualitative "so what."

Related Insights

Generic evaluation metrics like "helpfulness" or "conciseness" are vague and untrustworthy. A better approach is to first perform manual error analysis to find recurring problems (e.g., "tour scheduling failures"). Then, build specific, targeted evaluations (evals) that directly measure the frequency of these concrete issues, making metrics meaningful.
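
A minimal sketch of what such a targeted eval might look like, assuming transcripts are available as plain strings. The has_tour_scheduling_failure check and its marker phrases are hypothetical stand-ins for whatever concrete issue the manual error analysis surfaced; in practice the check is often a rubric applied by a reviewer or an LLM-as-judge prompt.

```python
from dataclasses import dataclass


@dataclass
class EvalResult:
    total: int
    failures: int

    @property
    def failure_rate(self) -> float:
        return self.failures / self.total if self.total else 0.0


def has_tour_scheduling_failure(transcript: str) -> bool:
    """Hypothetical check for one concrete issue found during manual error
    analysis. In practice this might be a regex, a hand-applied rubric, or
    an LLM-as-judge prompt."""
    markers = ("no availability found", "could not schedule", "double-booked")
    return any(m in transcript.lower() for m in markers)


def run_targeted_eval(transcripts: list[str]) -> EvalResult:
    """Count how often the specific issue occurs, instead of asking a vague
    question like 'was this response helpful?'"""
    failures = sum(has_tour_scheduling_failure(t) for t in transcripts)
    return EvalResult(total=len(transcripts), failures=failures)


if __name__ == "__main__":
    sample = [
        "Sure, I booked your tour for Saturday at 10am.",
        "Sorry, no availability found for that date.",
    ]
    result = run_targeted_eval(sample)
    print(f"Tour-scheduling failure rate: {result.failure_rate:.0%}")
```

Tracking that one number over time says far more about progress than a generic "helpfulness" score ever could.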

In the age of AI, the new standard for value is the "GPT Test." If a person's public statements, writing, or ideas could have been generated by a large language model, they will fail to stand out. This places an immense premium on true originality, deep insight, and an authentic voice—the very things AI struggles to replicate.

Users mistakenly evaluate AI tools based on the quality of the first output. However, since 90% of the work is iterative, the superior tool is the one that handles a high volume of refinement prompts most effectively, not the one with the best initial result.

A key metric for AI coding agent performance is real-time sentiment analysis of user prompts. By measuring whether users say 'fantastic job' or 'this is not what I wanted,' teams get an immediate signal of the agent's comprehension and effectiveness, which is more telling than lagging indicators like bug counts.
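
A minimal sketch of this kind of prompt-sentiment signal, assuming user messages arrive as plain strings. The tiny phrase lists are illustrative placeholders; a production system would use a proper sentiment model or an LLM classifier.

```python
# Illustrative lexicons only; swap in a real sentiment model in practice.
POSITIVE = {"fantastic", "great", "perfect", "thanks", "exactly"}
NEGATIVE = ("not what i wanted", "wrong", "broken", "doesn't work", "undo")


def score_prompt(prompt: str) -> int:
    """Return +1 for a positive prompt, -1 for a negative one, 0 otherwise."""
    text = prompt.lower()
    if any(phrase in text for phrase in NEGATIVE):
        return -1
    if any(word in text.split() for word in POSITIVE):
        return 1
    return 0


def session_sentiment(prompts: list[str]) -> float:
    """Average sentiment across a session: an immediate, leading signal of
    whether the agent is understanding the user, versus lagging bug counts."""
    scores = [score_prompt(p) for p in prompts]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    session = [
        "Add a retry to the upload handler",
        "This is not what I wanted, revert it",
        "Fantastic job, ship it",
    ]
    print(f"Session sentiment: {session_sentiment(session):+.2f}")
```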

To evaluate AI's role in building relationships, marketers must look beyond transactional KPIs. Leading indicators of success include sustained engagement, customers volunteering more information, and recommending the experience to others. These metrics quantify brand trust and empathy—proving the brand is earning belief, not just attention.

The primary bottleneck in improving AI is no longer data or compute, but the creation of 'evals'—tests that measure a model's capabilities. These evals act as product requirements documents (PRDs) for researchers, defining what success looks like and guiding the training process.

Open and click rates are ineffective for measuring AI-driven, two-way conversations. Instead, leaders should adopt new KPIs: outcome metrics (e.g., meetings booked), conversational quality (tracking an agent's 'I don't know' rate to measure trust), and, ultimately, customer lifetime value.
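
A minimal sketch of the conversational-quality KPI described above: the agent's 'I don't know' rate across its replies. The punt phrases and the plain-string reply format are assumptions to adapt to your own logging schema.

```python
from typing import Iterable

# Assumed punt phrases; tune these to how your agent actually declines.
DONT_KNOW_PHRASES = ("i don't know", "i'm not sure", "i can't help with that")


def dont_know_rate(agent_replies: Iterable[str]) -> float:
    """Share of agent replies that punt rather than answer. A moderate rate
    with accurate answers can build trust; a rate of zero may mean the agent
    is guessing instead of admitting uncertainty."""
    replies = list(agent_replies)
    if not replies:
        return 0.0
    punts = sum(
        any(p in reply.lower() for p in DONT_KNOW_PHRASES) for reply in replies
    )
    return punts / len(replies)


if __name__ == "__main__":
    replies = [
        "Your order ships Tuesday.",
        "I'm not sure, let me connect you with a specialist.",
        "Yes, we can book that meeting for Thursday.",
    ]
    print(f"'I don't know' rate: {dont_know_rate(replies):.0%}")
```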

As models mature, their core differentiator will become their underlying personality and values, shaped by their creators' objective functions. One model might optimize for user productivity by being concise, while another optimizes for engagement by being verbose.

The best AI models are trained on data that reflects deep, subjective qualities rather than simple, objective criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab.

AI's growth is hampered by a measurement problem, much as early digital advertising's was. The industry's acceleration won't come from better AI models alone, but from building 'boring' measurement infrastructure, as Comscore did for ads, to prove the tools actually work.