
For tasks where a simple right/wrong answer doesn't exist, verification is a major challenge. The solution is creating detailed rubrics with thousands of criteria, often developed with AI help. This provides a granular reward signal that allows models to climb the learning curve even in highly subjective domains.
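A minimal sketch of how such a rubric can become a training signal: each criterion is a narrow pass/fail check, and the weighted pass rate collapses into one scalar reward. The `Criterion` class and the toy checks below are illustrative assumptions, not any particular lab's implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str
    weight: float
    check: Callable[[str], bool]  # True if the response satisfies this criterion

def rubric_reward(response: str, rubric: list[Criterion]) -> float:
    """Collapse many granular pass/fail checks into one scalar reward in [0, 1]."""
    total = sum(c.weight for c in rubric)
    earned = sum(c.weight for c in rubric if c.check(response))
    return earned / total if total else 0.0

# Toy rubric for a writing task; real rubrics may hold thousands of criteria,
# many of them drafted with AI assistance and reviewed by experts.
rubric = [
    Criterion("addresses the reader directly", 1.0, lambda r: "reader" in r.lower()),
    Criterion("stays under 50 words", 1.0, lambda r: len(r.split()) <= 50),
    Criterion("avoids hedging filler", 0.5, lambda r: "maybe" not in r.lower()),
]

print(rubric_reward("A short note for the reader.", rubric))  # → 1.0
```

Because each check is cheap and deterministic, the same rubric can score millions of rollouts, which is what makes the reward signal usable for reinforcement learning in subjective domains.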

Related Insights

AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress requires a messy, societal conversation to define 'success,' not just a developer-built technical leaderboard.

The key to enabling an AI agent like Ralph to work autonomously isn't just a clever prompt, but a self-contained feedback loop. By providing clear, machine-verifiable "acceptance criteria" for each task, the agent can test its own work and confirm completion without requiring human intervention or subjective feedback.
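The loop described above can be sketched as follows. The acceptance criteria are plain predicates the agent can run itself; `propose_fix` is a hypothetical stand-in for whatever model call actually revises the work.

```python
def propose_fix(artifact: str, failures: list[str]) -> str:
    # Placeholder for a model call; here it just tags the artifact as revised.
    return artifact + " [revised: " + ", ".join(failures) + "]"

def run_until_accepted(artifact: str, criteria: dict, max_attempts: int = 5):
    """Check machine-verifiable criteria, revise on failure, stop when all pass."""
    failures: list[str] = []
    for attempt in range(max_attempts):
        failures = [name for name, check in criteria.items() if not check(artifact)]
        if not failures:
            return artifact, attempt  # every acceptance criterion passed
        artifact = propose_fix(artifact, failures)
    raise RuntimeError(f"still failing after {max_attempts} attempts: {failures}")

# Usage: the agent confirms completion on its own, no human in the loop.
result, attempts = run_until_accepted("draft", {"non_empty": lambda a: bool(a.strip())})
print(attempts)  # → 0, accepted immediately
```

The design point is that every criterion must be checkable by code (a test suite passing, a linter returning clean, a build succeeding); anything requiring subjective human judgment breaks the autonomy of the loop.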

The frontier of AI training is moving beyond humans ranking model outputs (RLHF). Now, high-skilled experts create detailed success criteria (like rubrics or unit tests), which an AI then uses to provide feedback to the main model at scale, a process called RLAIF.
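For the unit-test flavor of this, a rough sketch: the expert writes test cases once, and an automated harness then scores any number of model-generated solutions without further human effort. The harness below uses `exec` on a candidate defining a `solve` function; both the task and the harness shape are assumptions for illustration.

```python
def score_solution(source_code: str, tests: list[tuple[tuple, object]]) -> float:
    """Execute a candidate solution and return the fraction of expert tests it passes."""
    namespace: dict = {}
    try:
        exec(source_code, namespace)  # candidate must define `solve`
        solve = namespace["solve"]
    except Exception:
        return 0.0  # doesn't even run: zero reward
    passed = 0
    for args, expected in tests:
        try:
            if solve(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing test case simply earns no credit
    return passed / len(tests)

# Expert-authored test cases for a toy task: sum a list of numbers.
tests = [(([1, 2, 3],), 6), (([],), 0), (([5],), 5)]

print(score_solution("def solve(xs):\n    return sum(xs)", tests))  # → 1.0
```

This is the leverage the paragraph describes: the expert's effort is front-loaded into the success criteria, and the feedback itself scales automatically across every training rollout.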

While AI has mastered verifiable tasks with clear right answers, its future growth depends on human experts training models in subjective fields where 'good' is not easily defined. Companies are now sourcing professionals to act as 'verifiers' who teach AI nuanced, domain-specific judgment.

Unlike coding with its verifiable unit tests, complex legal work lacks a binary success metric. Harvey addresses this reinforcement learning challenge by treating senior partner feedback and edits as the "reward function," mirroring how quality is judged in the real world. The ultimate verification is long-term success, like a merger avoiding future litigation.

Data that measures success, like a grading rubric, is far more valuable for AI training than simple raw output. This 'second kind of data' enables iterative learning by allowing models to attempt a problem, receive a score, and learn from the feedback.

To teach AI subjective skills like poetry, a group of experts with some disagreement is better than one with full consensus. This approach captures diverse tastes and edge cases, which is more valuable for creating a robust model than achieving perfect agreement.

Building a functional AI agent is just the starting point. The real work lies in developing a set of evaluations ("evals") to test if the agent consistently behaves as expected. Without quantifying failures and successes against a standard, you're just guessing, not iteratively improving the agent's performance.

A one-size-fits-all evaluation method is inefficient. Use simple code for deterministic checks like word count. Leverage an LLM-as-a-judge for subjective qualities like tone. Reserve costly human evaluation for ambiguous cases flagged by the LLM or for validating new features.
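The tiered routing above can be sketched in a few lines. `llm_judge` here is a placeholder for a real model call, and the routing thresholds are illustrative assumptions; the point is the ordering from cheapest check to costliest.

```python
def deterministic_checks(output: str, max_words: int = 100) -> bool:
    """Tier 1: cheap code-level checks, e.g. word count."""
    return 0 < len(output.split()) <= max_words

def llm_judge(output: str) -> tuple[str, float]:
    # Tier 2 placeholder for an LLM-as-judge call on subjective qualities
    # like tone; returns (verdict, confidence).
    verdict = "pass" if "thanks" in output.lower() else "fail"
    return verdict, 0.9

def evaluate(output: str) -> str:
    if not deterministic_checks(output):
        return "fail (deterministic)"        # caught by the cheapest tier
    verdict, confidence = llm_judge(output)
    if confidence < 0.7:
        return "escalate to human review"    # Tier 3: reserved for ambiguous cases
    return verdict

print(evaluate("Thanks for reaching out!"))  # → pass
```

Routing this way keeps the expensive tiers small: deterministic code filters the obvious failures, the LLM judge handles the subjective bulk, and humans only see the cases the judge itself is unsure about.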

To manage non-deterministic AI products, Shopify created an internal tool where PMs grade AI-generated outputs. This creates a "ground truth" dataset of what "good" looks like, which is then used to fine-tune a separate LLM that acts as an automated quality judge for new features and updates.