To teach AI subjective skills like poetry, a panel of experts who partially disagree is better than a panel in full consensus. The disagreement itself captures diverse tastes and edge cases, which is more valuable for building a robust model than perfect agreement.
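One simple way to make that disagreement usable is to keep a panel's split votes as a soft label rather than collapsing them to a majority verdict. A minimal sketch in Python; the aggregation scheme is an assumption, not the source's method:

```python
# Minimal sketch: preserve rater disagreement as a soft label instead of
# a majority vote. The labels and aggregation are illustrative only.
from collections import Counter

def soft_label(ratings: list[str]) -> dict[str, float]:
    """Turn a panel's ratings into a probability distribution.

    A poem rated ['good', 'good', 'bad'] keeps its 2-vs-1 split, so the
    model can learn that taste legitimately varies on this example.
    """
    counts = Counter(ratings)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(soft_label(["good", "good", "bad"]))  # {'good': 0.67, 'bad': 0.33} (approx.)
```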
AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress requires a messy, societal conversation to define 'success,' not just a developer-built technical leaderboard.
A fascinating meta-learning loop emerged where an LLM provides real-time 'quality checks' to human subject-matter experts. This helps them learn the novel skill of effectively teaching and 'stumping' another AI, bridging the gap between their domain expertise and the mechanics of model training.
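A hypothetical sketch of one such quality check (the helper functions are placeholders, not a real API): before an expert's 'stumper' enters the training set, an LLM verifies that the target model doesn't already solve it.

```python
# Hypothetical quality-check loop: does this expert question actually
# stump the model? `call_model` and `call_judge` are placeholders for
# whatever LLM client is in use; they are not real APIs.

def call_model(question: str) -> str:
    raise NotImplementedError("query the model being trained")

def call_judge(prompt: str) -> str:
    raise NotImplementedError("query a separate judge LLM")

def is_a_stump(question: str, expert_answer: str) -> bool:
    """Return True if the target model fails the expert's question."""
    attempt = call_model(question)
    verdict = call_judge(
        f"Reference answer: {expert_answer}\n"
        f"Model answer: {attempt}\n"
        "Does the model answer match the reference? Reply YES or NO."
    )
    return verdict.strip().upper().startswith("NO")
```

Experts get this verdict back in real time, learning which of their questions genuinely push past the model's current ability.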
The term "data labeling" minimizes the complexity of AI training. A better analogy is "raising a child," as the process involves teaching values, creativity, and nuanced judgment. This reframe highlights the deep responsibility of shaping the "objective functions" for future AI.
AI performs poorly in areas where expertise is based on unwritten 'taste' or intuition rather than documented knowledge. If the correct approach doesn't exist in training data or isn't explicitly provided by human trainers, models will inevitably struggle with that particular problem.
The frontier of AI training is moving beyond humans ranking model outputs (RLHF). Now, highly skilled experts create detailed success criteria (like rubrics or unit tests), which an AI then uses to provide feedback to the main model at scale, a process called RLAIF.
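In code, the pattern might look like the following sketch; the rubric, prompt format, and `call_llm` helper are all assumptions, not any specific lab's pipeline:

```python
# RLAIF sketch: experts write the rubric once; an LLM judge applies it to
# every output. `call_llm` stands in for any chat-completion client.

RUBRIC = """Score the answer from 0 to 5 on each criterion:
1. Factual accuracy
2. Follows the requested format
3. Supports every claim with evidence
Return one integer per line."""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def rubric_reward(question: str, answer: str) -> float:
    """Grade an answer against the expert rubric, normalized to [0, 1]."""
    verdict = call_llm(f"{RUBRIC}\n\nQuestion: {question}\nAnswer: {answer}")
    scores = [int(line) for line in verdict.splitlines() if line.strip().isdigit()]
    if not scores:
        return 0.0  # unparseable verdict scores zero in this toy version
    return sum(scores) / (5 * len(scores))
```

The resulting scalar reward can then drive reinforcement-learning updates at a volume no human panel could grade by hand.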
Data that measures success, like a grading rubric, is far more valuable for AI training than raw output alone. This 'second kind of data' enables iterative learning by allowing models to attempt a problem, receive a score, and learn from the feedback.
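A toy illustration of that loop, using unit tests as the grader (the example problem and helper are invented for illustration):

```python
# The "second kind of data" here is the grader itself: a test suite whose
# pass rate becomes the feedback signal for each attempt.

def grade_solution(solution_fn, test_cases: list[tuple]) -> float:
    """Score a candidate solution by the fraction of tests it passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if solution_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash simply scores zero on that case
    return passed / len(test_cases)

# Grading two attempts at an absolute-value function:
tests = [((3,), 3), ((-3,), 3), ((0,), 0)]
print(grade_solution(abs, tests))          # 1.0   (perfect score)
print(grade_solution(lambda x: x, tests))  # ~0.67 (fails the negative case)
```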
National tests in Sweden revealed human evaluators for oral exams were shockingly inconsistent, sometimes performing worse than random chance. While AI grading has its own biases, they can be identified and systematically adjusted, unlike hidden human subjectivity.
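One well-known, correctable AI-judge bias is position bias in pairwise comparisons: judges tend to favor whichever answer appears first. A common mitigation, sketched below with a placeholder helper rather than a real API, is to run the judge in both orders and trust only order-invariant verdicts:

```python
# Debiasing sketch for position bias in a pairwise LLM judge.

def judge_prefers_first(first: str, second: str) -> bool:
    """Placeholder: wraps an LLM judge prompt; True if `first` wins."""
    raise NotImplementedError("plug in an LLM judge here")

def debiased_preference(a: str, b: str) -> str | None:
    """Run the judge in both orders; keep only order-invariant verdicts."""
    a_wins_forward = judge_prefers_first(a, b)
    b_wins_reversed = judge_prefers_first(b, a)
    if a_wins_forward and not b_wins_reversed:
        return "a"
    if b_wins_reversed and not a_wins_forward:
        return "b"
    return None  # verdict flipped with position: flagged as bias, not trusted
```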
To codify a specific person's "taste" in writing, the team fed the DSPy framework a dataset of tweets with thumbs up/down ratings and explanations. DSPy then optimized a prompt that created an AI "judge" capable of evaluating new content with 76.5% accuracy against that person's preferences.
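The source doesn't show the code, but in DSPy the shape of such a judge might look like this sketch (the signature fields, optimizer choice, model name, and sample tweets are assumptions):

```python
import dspy

# Assumed model and setup; swap in whatever LM the project actually uses.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class JudgeTweet(dspy.Signature):
    """Decide whether this specific person would approve of the tweet."""
    tweet: str = dspy.InputField()
    approved: bool = dspy.OutputField()

judge = dspy.ChainOfThought(JudgeTweet)

def agreement(example, pred, trace=None):
    # Metric: does the judge match the person's thumbs up/down label?
    return example.approved == pred.approved

# Training data: tweets with the person's own ratings (placeholders here).
trainset = [
    dspy.Example(tweet="Shipped a new feature today!", approved=True).with_inputs("tweet"),
    dspy.Example(tweet="Hot take: meetings are great.", approved=False).with_inputs("tweet"),
]

# BootstrapFewShot is one of several DSPy optimizers; the source doesn't
# say which one the team actually used.
optimizer = dspy.BootstrapFewShot(metric=agreement)
tuned_judge = optimizer.compile(judge, trainset=trainset)

print(tuned_judge(tweet="Draft tweet to evaluate").approved)
```

The person's written explanations for each thumbs up/down could further serve as rationales during optimization, though the sketch above uses only the binary labels.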
The best AI models are trained on data that reflects deep, subjective qualities—not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab.
For subjective outputs like image aesthetics and face consistency, quantitative metrics are misleading. Google's team relies heavily on disciplined human evaluations, internal 'eyeballing,' and community testing to capture the subtle, emotional impact that benchmarks can't quantify.