National tests in Sweden revealed that human evaluators for oral exams were shockingly inconsistent, sometimes performing worse than random chance. While AI grading has its own biases, those biases can be identified and systematically corrected, unlike hidden human subjectivity.
AI excels where success is quantifiable (e.g., code generation). Its greatest challenge lies in subjective domains like mental health or education. Progress requires a messy, societal conversation to define 'success,' not just a developer-built technical leaderboard.
The frontier of AI training is moving beyond humans ranking model outputs (RLHF). Now, highly skilled experts create detailed success criteria (like rubrics or unit tests), which an AI judge then uses to give the main model feedback at scale, a process called RLAIF (reinforcement learning from AI feedback).
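A minimal sketch of that loop in Python, assuming a hypothetical `judge_model()` wrapper around an LLM API; the rubric items and weights are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str  # an expert-written success criterion
    weight: float   # its relative importance

# Invented rubric for illustration; real ones come from domain experts.
RUBRIC = [
    RubricItem("Answer cites the relevant source material", 2.0),
    RubricItem("Reasoning steps are explicit and correct", 3.0),
    RubricItem("Tone is appropriate for a student audience", 1.0),
]

def judge_model(prompt: str) -> str:
    """Hypothetical judge call; swap in a real LLM API client."""
    return "pass"  # placeholder so the sketch runs end to end

def rubric_reward(question: str, answer: str) -> float:
    """Judge one answer against every rubric item and return a weighted
    reward in [0, 1] that an RL trainer can optimize against."""
    total = sum(item.weight for item in RUBRIC)
    earned = 0.0
    for item in RUBRIC:
        prompt = (
            f"Question: {question}\nAnswer: {answer}\n"
            f"Criterion: {item.criterion}\n"
            "Reply with exactly 'pass' or 'fail'."
        )
        if judge_model(prompt).strip().lower() == "pass":
            earned += item.weight
    return earned / total

print(rubric_reward("What causes tides?", "The Moon's gravity ..."))  # 1.0 with the stub
```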
When a lab report screenshot included a dismissive note about "hemolysis," both human doctors and a vision-enabled AI made the same mistake of ignoring a critical data point. This highlights how AI can inherit human biases embedded in data presentation, underscoring the need to test models with varied information formats.
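One hedged way to operationalize that testing: render the same data point in several formats and check whether the model still flags it under each presentation. The finding, the formats, and the `ask_model()` client below are all illustrative assumptions:

```python
# Invented finding and hypothetical ask_model(); replace with a real client.
FINDING = "potassium 6.8 mmol/L"  # critically high

PRESENTATIONS = {
    "plain_text": f"Labs: {FINDING}.",
    "table": "| Test | Result |\n| K+  | 6.8 mmol/L |",
    "dismissive_note": f"Labs: {FINDING}. Note: sample hemolyzed, disregard?",
}

def ask_model(report: str) -> str:
    """Hypothetical model call."""
    return "Potassium is critically elevated; treat urgently."  # placeholder

def flags_finding(answer: str) -> bool:
    a = answer.lower()
    return "potassium" in a and "elevat" in a

for name, report in PRESENTATIONS.items():
    answer = ask_model(f"Summarize urgent findings:\n{report}")
    print(f"{name}: {'flagged' if flags_finding(answer) else 'MISSED'}")
```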
When creating an "LLM as a judge" to automate evaluations, resist the urge to use a 1-5 rating scale. It creates ambiguity (what does an average score of 3.2 versus 3.7 actually mean?). Instead, force the judge to make a binary "pass" or "fail" decision. It's a more painful but ultimately more tractable and actionable way to measure quality.
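A minimal sketch of such a binary judge, assuming a hypothetical `call_llm()` wrapper around your model API of choice:

```python
JUDGE_PROMPT = """You are grading a customer-support reply.

Question: {question}
Reply: {reply}

Does the reply fully and correctly answer the question?
Answer with exactly one word: PASS or FAIL."""

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in a real API client."""
    return "PASS"  # placeholder

def judge(question: str, reply: str) -> bool:
    verdict = call_llm(JUDGE_PROMPT.format(question=question, reply=reply))
    return verdict.strip().upper() == "PASS"

results = [judge("How do I reset my password?", "Click 'Forgot password'...")]
print(f"pass rate: {sum(results) / len(results):.0%}")
```

A pass rate over a test set ("87% pass") is then directly actionable in a way an average of 3.4 out of 5 is not.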
AI makes cheating easier, undermining grades as a motivator. More importantly, it enables continuous, nuanced assessment that renders one-off standardized tests obsolete. This forces a necessary shift from a grade-driven to a learning-driven education system.
Instead of policing AI use, a novel strategy is for teachers to show students what AI produces on an assignment and grade it as a 'B-'. This sets a clear baseline, reframing AI as a starting point and challenging students to use human creativity and critical thinking to achieve a higher grade.
A study found that evaluators rated AI-generated research ideas as better than those from grad students. However, when the experiments were actually conducted, the human ideas produced superior results. This highlights a bias: we may favor AI's articulate proposals over more substantively promising human intuition.
Generative AI's appeal highlights a systemic issue in education. When grades (which shape financial aid and job prospects) are tied solely to finished products, students under immense pressure from other life stressors rationally reach for tools that shortcut the learning process to achieve the desired outcome.
The benchmark for AI reliability isn't 100% perfection. It's simply being better than the inconsistent, error-prone humans it augments. Since human error is the root cause of most critical failures (like cyber breaches), this is an achievable and highly valuable standard.
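As a back-of-the-envelope illustration of that standard (all counts below are invented), the bar to clear is a lower error rate than the human baseline on the same task set, not zero errors:

```python
# All counts invented for illustration.
human_errors, human_total = 42, 500  # e.g., human reviewers on a shared case set
ai_errors, ai_total = 23, 500        # the model on the same cases

human_rate = human_errors / human_total  # 8.4%
ai_rate = ai_errors / ai_total           # 4.6%
print(f"human: {human_rate:.1%}, AI: {ai_rate:.1%}")

# The bar is ai_rate < human_rate on a fair, shared test set, not ai_rate == 0.
print("clears the bar" if ai_rate < human_rate else "does not clear the bar")
```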
AI models excel at specific tasks (acing evals, for instance) because they are trained exhaustively on narrow datasets, akin to a student practicing 10,000 hours for a coding competition. They become experts in that domain but fail to develop the broader judgment and generalization skills needed for real-world success.