An attempt to teach AI 'scientific taste' using RLHF on hypotheses failed because human raters prioritized superficial qualities like tone and feasibility over a hypothesis's potential world-changing impact. This suggests a need for feedback tied to downstream outcomes, not just human preference.
Contrary to the belief that synthetic data will replace human annotation, the need for human feedback will grow. While synthetic data works for simple, factual tasks, it cannot handle complex, multi-step reasoning, cultural nuance, or multimodal inputs. This makes RLHF essential for at least the next decade.
In high-stakes fields like pharma, AI's ability to generate more ideas (e.g., drug targets) is less valuable than its ability to aid in decision-making. Physical constraints on experimentation mean you can't test everything. The real need is for tools that help humans evaluate, prioritize, and gain conviction on a few key bets.
The frontier of AI training is moving beyond humans ranking model outputs (RLHF). Now, highly skilled experts create detailed success criteria (like rubrics or unit tests), which an AI then uses to provide feedback to the main model at scale, a process called RLAIF.
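A minimal sketch of what rubric-driven feedback could look like, assuming a hypothetical `judge` callable that scores one criterion at a time (the rubric items, weights, and toy judge below are illustrative, not from the source):

```python
from typing import Callable

# Hypothetical expert-written rubric: each criterion carries a weight.
RUBRIC = {
    "states a falsifiable mechanism": 0.4,
    "cites supporting prior evidence": 0.3,
    "proposes a feasible first experiment": 0.3,
}

def rubric_reward(response: str, judge: Callable[[str, str], float]) -> float:
    """Aggregate per-criterion scores from an AI judge into one scalar reward.

    `judge(criterion, response)` is assumed to return a score in [0, 1],
    e.g. by prompting a grader model with the criterion and the response.
    """
    return sum(weight * judge(criterion, response)
               for criterion, weight in RUBRIC.items())

# Toy judge for illustration only; a real system would call a grader model here.
def toy_judge(criterion: str, response: str) -> float:
    return 1.0 if "mechanism" in response.lower() else 0.5

print(rubric_reward("We propose a falsifiable mechanism for ...", toy_judge))  # 1.0
```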
Reinforcement Learning from Human Feedback (RLHF) is a popular term, but it's just one method. The core concept is reinforcing desired model behavior using various signals. These can include AI feedback (RLAIF), where another AI judges the output, or verifiable rewards, like checking if a model's answer to a math problem is correct.
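As a concrete illustration of the verifiable-reward case, here is a minimal sketch (function name and parsing rule are my own) that scores a model's answer to a math problem by direct comparison with the known result:

```python
import re

def math_reward(model_output: str, correct_answer: float, tol: float = 1e-6) -> float:
    """Return 1.0 if the last number in the model's output matches the known answer.

    No human judgment involved: the reward is verifiable by direct comparison.
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not numbers:
        return 0.0
    return 1.0 if abs(float(numbers[-1]) - correct_answer) <= tol else 0.0

print(math_reward("The sum of 17 and 25 is 42.", 42))  # 1.0
print(math_reward("I think the answer is 40.", 42))    # 0.0
```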
Focusing on which reinforcement learning algorithm is best (e.g., PPO vs. DPO) is misguided. The more critical factor is the quality and verifiability of the input data signal itself, which exists on a spectrum from subjective human preference (RLHF) to objective, verifiable truth.
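One way to picture that spectrum, as a hedged sketch rather than any lab's actual pipeline: the policy-update step only ever consumes the signal it is handed, so changing the source of that signal matters more than changing the optimizer (the class and function names below are illustrative):

```python
from typing import Protocol

class RewardSignal(Protocol):
    """Common interface: the training loop sees one scalar per response."""
    def score(self, prompt: str, response: str) -> float: ...

class HumanPreferenceModel:
    # Subjective end of the spectrum: a model trained on human pairwise preferences.
    def score(self, prompt: str, response: str) -> float:
        return 0.5  # placeholder for a learned reward-model forward pass

class VerifiableChecker:
    # Objective end: reward comes from checking against ground truth (e.g. math).
    def __init__(self, answers: dict[str, str]):
        self.answers = answers
    def score(self, prompt: str, response: str) -> float:
        return 1.0 if self.answers.get(prompt, "") in response else 0.0

def policy_update(reward: RewardSignal, prompt: str, response: str) -> float:
    # A policy-gradient method like PPO consumes this scalar; DPO would instead
    # consume the underlying preference pairs. Either way, the quality of the
    # signal is fixed upstream, before the algorithm ever runs.
    return reward.score(prompt, response)

checker = VerifiableChecker({"2+2?": "4"})
print(policy_update(checker, "2+2?", "The answer is 4"))  # 1.0
```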
In an experiment testing AI-generated hypotheses for macular degeneration, the hypothesis that succeeded in lab tests was not the one ranked highest by ophthalmologists. This suggests expert intuition is an unreliable predictor of success compared to systematic, AI-driven exploration and verification.
A study found evaluators rated AI-generated research ideas as better than those from grad students. However, when the experiments were conducted, human ideas produced superior results. This highlights a bias where we may favor AI's articulate proposals over more substantively promising human intuition.
A major frontier for AI in science is developing 'taste'—the human ability to discern not just if a research question is solvable, but if it is genuinely interesting and impactful. Models currently struggle to differentiate an exciting result from a boring one.
National tests in Sweden revealed human evaluators for oral exams were shockingly inconsistent, sometimes performing worse than random chance. While AI grading has its own biases, they can be identified and systematically adjusted, unlike hidden human subjectivity.
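A minimal sketch of what "identify and systematically adjust" could mean in practice, assuming a small calibration set of exams graded by both the AI and a trusted reference panel (the offset-correction approach and numbers are my own illustration, not the study's method):

```python
import statistics

# Hypothetical calibration data: (ai_grade, reference_grade) on the same exams.
calibration = [(7.5, 7.0), (6.0, 5.5), (8.5, 8.0), (5.0, 5.0)]

# Systematic bias = mean amount by which the AI over- or under-grades.
bias = statistics.mean(ai - ref for ai, ref in calibration)

def adjusted_grade(ai_grade: float) -> float:
    """Apply the measured offset so AI grades line up with the reference panel."""
    return ai_grade - bias

print(f"measured bias: {bias:+.2f}")  # +0.38 (this AI grades slightly high)
print(adjusted_grade(7.0))            # 6.625
```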
The best AI models are trained on data that reflects deep, subjective qualities—not just simple criteria. This "taste" is a key differentiator, influencing everything from code generation to creative writing, and is shaped by the values of the frontier lab.