To avoid saturated evaluations that only confirm existing capabilities, Notion's team creates difficult test suites that they expect current models to fail roughly 70% of the time. This "headroom" provides a clear signal to model providers about frontier needs and helps the team anticipate where the technology is heading.
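A minimal sketch of what enforcing that kind of headroom target could look like in practice; the 70% threshold, the `run_case` interface, and the saturation warning are illustrative assumptions, not Notion's actual tooling:

```python
from typing import Callable

HEADROOM_TARGET = 0.70  # fraction of cases a frontier suite is expected to fail today

def headroom_report(cases: list, run_case: Callable[[object], bool]) -> float:
    """Return the suite's observed failure rate and warn when it is saturating."""
    failures = sum(1 for case in cases if not run_case(case))
    failure_rate = failures / len(cases)
    if failure_rate < HEADROOM_TARGET / 2:
        # The suite has become too easy to tell you anything about the frontier.
        print(f"Only {failure_rate:.0%} of cases fail; time to write harder evals.")
    return failure_rate
```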
Don't treat evals as a mere checklist. Instead, use them as a creative tool to discover opportunities. A well-designed eval can reveal that a product is underperforming for a specific user segment, pointing directly to areas for high-impact improvement that a simple "vibe check" would miss.
While evals involve testing, their purpose isn't merely to report bugs the way traditional QA does (information). For an AI PM, evals are a core tool for actively shaping and improving the product's behavior and performance (transformation) by iteratively refining prompts, models, and orchestration layers.
As benchmarks become standard, AI labs optimize models to excel at them, leading to score inflation without necessarily improving generalized intelligence. The solution isn't a single perfect test, but continuously creating new evals that measure capabilities relevant to real-world user needs.
In a new technological wave like AI, a high project failure rate is desirable. It indicates that a company is aggressively experimenting and pushing boundaries to discover what provides real value, rather than being too conservative.
The primary bottleneck in improving AI is no longer data or compute, but the creation of 'evals'—tests that measure a model's capabilities. These evals act as product requirement documents (PRDs) for researchers, defining what success looks like and guiding the training process.
Notion treats its entire evaluation process as a coding agent problem. The system is designed for an agent to download a dataset, run an eval, identify a failure, debug the issue, and implement a fix, all within an automated loop. This turns quality assurance into a meta-problem for agents to solve.
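The talk doesn't spell out the implementation, but the loop it describes could look roughly like the sketch below, where the eval runner, the fix-proposing agent, and the patch step are passed in as placeholders for whatever real infrastructure performs them:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EvalReport:
    failures: list  # failing test cases, in whatever shape your harness produces

def eval_fix_loop(
    run_eval: Callable[[], EvalReport],              # scores the current system
    propose_fix: Callable[[object], Optional[str]],  # agent: failing case -> candidate patch
    apply_patch: Callable[[str], None],              # lands the patch in the codebase
    max_iterations: int = 5,
) -> bool:
    """Run the eval, pick a failure, let an agent debug and patch it, repeat."""
    for _ in range(max_iterations):
        report = run_eval()
        if not report.failures:
            return True                  # every case passes; the loop is done
        patch = propose_fix(report.failures[0])
        if patch is None:
            return False                 # the agent couldn't produce a fix
        apply_patch(patch)
    return False                         # still failing after the iteration budget
```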
Building a functional AI agent is just the starting point. The real work lies in developing a set of evaluations ("evals") to test if the agent consistently behaves as expected. Without quantifying failures and successes against a standard, you're just guessing, not iteratively improving the agent's performance.
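A minimal sketch of what "quantifying failures and successes against a standard" can mean in code; the agent, the test cases, and the string-based grading rules below are toy assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]   # grading rule: does the output meet the standard?

def run_suite(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run the agent on every case and return the pass rate."""
    passed = sum(1 for case in cases if case.check(agent(case.prompt)))
    return passed / len(cases)

# Toy agent and two toy cases, just to show the shape of the loop.
cases = [
    EvalCase("Summarize this page in one sentence.", lambda out: len(out.split(".")) <= 2),
    EvalCase("List three action items.", lambda out: out.count("\n") >= 2),
]
print(f"pass rate: {run_suite(lambda prompt: 'A short summary.', cases):.0%}")
```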
If all your evals pass, you don't know the current limits of your system. Evals that consistently fail mark that frontier and act as a benchmark for what comes next. When a new foundation model is released, rerunning these tests immediately reveals whether it has overcome previous limitations.
The rapid improvement of AI models is maxing out industry-standard benchmarks for tasks like software engineering. To truly understand AI's impact and capability, companies must develop their own evaluation systems tailored to their specific workflows, rather than waiting for external studies.
To stay on the cutting edge, maintain a list of complex tasks that current AI models can't perform well. Whenever a new model is released, run it against this suite. This practice provides an intuitive feel for the model's leap in capability and helps you identify when a previously impossible workflow becomes feasible.
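One way to keep such a list actionable, sketched with a hypothetical `ask_model` callable standing in for whichever provider API you use, and deliberately crude feasibility checks:

```python
from typing import Callable

# Tasks today's models handle poorly, each paired with a rough feasibility check.
FRONTIER_TASKS: dict[str, Callable[[str], bool]] = {
    "Refactor a five-file module without breaking its tests.": lambda out: "tests passed" in out.lower(),
    "Draft a database migration plan from our schema doc.": lambda out: "alter table" in out.lower(),
}

def score_release(ask_model: Callable[[str], str]) -> None:
    """Run a newly released model against the frontier suite and flag what is now feasible."""
    for task, is_feasible in FRONTIER_TASKS.items():
        status = "now feasible" if is_feasible(ask_model(task)) else "still out of reach"
        print(f"{status}: {task}")
```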