In a direct comparison, a medicinal chemist outperformed an AI model at evaluating the synthesizability of 30,000 compounds. The chemist's intuitive, "liability-spotting" approach highlights the continued value of expert human judgment and the need for human-in-the-loop AI systems.

Related Insights

To ensure accuracy in its legal AI, LexisNexis unexpectedly hired a large number of lawyers, not just data scientists. These legal experts are crucial for reviewing AI output, identifying errors, and training the models, highlighting the essential role of human domain expertise in specialized AI.

Instead of waiting for AI models to be perfect, design your application from the start to allow for human correction. This pragmatic approach acknowledges AI's inherent uncertainty and allows you to deliver value sooner by leveraging human oversight to handle edge cases.
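
A minimal sketch of that pattern, assuming a hypothetical classify() model interface and a hand-rolled review queue: anything the model is unsure about is routed to a person instead of shipped as-is.

```python
from dataclasses import dataclass, field
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85  # illustrative; tune per use case


@dataclass
class Prediction:
    input_text: str
    label: str
    confidence: float
    corrected_label: Optional[str] = None  # filled in later by a human reviewer


@dataclass
class ReviewQueue:
    pending: list[Prediction] = field(default_factory=list)

    def submit(self, pred: Prediction) -> None:
        self.pending.append(pred)


def handle_request(input_text: str, model, queue: ReviewQueue) -> Prediction:
    """Route low-confidence output to a human instead of failing silently."""
    label, confidence = model.classify(input_text)  # hypothetical model call
    pred = Prediction(input_text, label, confidence)
    if confidence < CONFIDENCE_THRESHOLD:
        queue.submit(pred)  # a human resolves the edge case
    return pred
```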

Don't ask an LLM to perform initial error analysis; it lacks the product context to spot subtle failures. Instead, have a human expert write detailed, freeform notes ("open codes"). Then, leverage an LLM's strength in synthesis to automatically categorize those hundreds of human-written notes into actionable failure themes ("axial codes").
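
One way that division of labor might look in code, assuming a hypothetical call_llm() wrapper around whatever chat-completion API you use: the open codes come from a human expert, and the LLM only does the grouping.

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your chat-completion API of choice."""
    raise NotImplementedError


def axial_code(open_codes: list[str], max_themes: int = 10) -> dict[str, list[str]]:
    """Group human-written open codes into a handful of failure themes."""
    numbered = "\n".join(f"{i}: {note}" for i, note in enumerate(open_codes))
    prompt = (
        f"Below are {len(open_codes)} freeform error notes written by a human reviewer.\n"
        f"Group them into at most {max_themes} failure themes and return JSON mapping "
        "each theme name to the list of note indices it covers.\n\n" + numbered
    )
    themes = json.loads(call_llm(prompt))  # expects {"theme": [indices], ...}
    # Map indices back to the original human-written notes.
    return {theme: [open_codes[int(i)] for i in idxs] for theme, idxs in themes.items()}


# Usage: the open codes always come from a human reviewing real traces;
# the LLM is used only for the synthesis step.
# themes = axial_code(open_codes)
```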

True creative mastery emerges from an unpredictable human process. AI can generate options quickly but bypasses this journey, losing the potential for inexplicable, last-minute genius that defines truly great work. It optimizes for speed at the cost of brilliance.

Managing AI's unpredictability requires more than just better models. Product teams must work with researchers on training data and specific evaluations for sensitive content. At the same time, the UI must clearly differentiate between original and AI-generated content to facilitate effective human oversight.
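
One way to make that distinction enforceable rather than cosmetic is to record provenance in the data model itself, so the UI can never render AI output unmarked. The sketch below is illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    ORIGINAL = "original"   # authored or verified by a human
    AI_GENERATED = "ai"     # produced by a model, not yet reviewed


@dataclass
class ContentBlock:
    text: str
    provenance: Provenance


def render(block: ContentBlock) -> str:
    """Render AI-generated text with an explicit visual marker (HTML sketch)."""
    if block.provenance is Provenance.AI_GENERATED:
        return f'<div class="ai-generated" aria-label="AI-generated">{block.text}</div>'
    return f"<div>{block.text}</div>"
```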

AI evaluation shouldn't be confined to engineering silos. Subject matter experts (SMEs) and business users hold the critical domain knowledge to assess what's "good." Providing them with GUI-based tools, like an "eval studio," is crucial for continuous improvement and building trustworthy enterprise AI.
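
A bare-bones version of such a tool can be surprisingly small. The Streamlit sketch below (file paths and field names are illustrative) shows SMEs one model output at a time and records their verdict and reasoning for the engineering team.

```python
import json

import streamlit as st

# Model outputs awaiting SME review (path and schema are illustrative).
records = [json.loads(line) for line in open("pending_outputs.jsonl")]

st.title("Eval Studio (sketch)")
if "idx" not in st.session_state:
    st.session_state["idx"] = 0
idx = st.session_state["idx"]

if idx < len(records):
    record = records[idx]
    st.subheader("Input")
    st.write(record["input"])
    st.subheader("Model output")
    st.write(record["output"])

    verdict = st.radio("SME verdict", ["Good", "Needs fix", "Wrong"])
    notes = st.text_area("Why? (domain reasoning the eng team can act on)")

    if st.button("Save and next"):
        with open("sme_labels.jsonl", "a") as f:
            f.write(json.dumps({"id": record["id"], "verdict": verdict, "notes": notes}) + "\n")
        st.session_state["idx"] = idx + 1
        st.rerun()
else:
    st.write("All outputs reviewed.")
```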

AI can produce scientific claims and codebases thousands of times faster than humans. However, the meticulous work of validating these outputs remains a human task. This growing gap between generation and verification could create a backlog of unproven ideas, slowing true scientific advancement.

Unlike deterministic SaaS software that works consistently, AI is probabilistic and doesn't work perfectly out of the box. Achieving "human-grade" performance (e.g., 99.9% reliability) requires continuous tuning and expert guidance, countering the hype that AI is an immediate, hands-off solution.
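
To make that target concrete, one option is to express it as a release gate over an eval set; the sketch below (numbers included) is illustrative, not a standard tool.

```python
TARGET_RELIABILITY = 0.999  # "human-grade": at most ~1 failure per 1,000 cases


def reliability(results: list[bool]) -> float:
    """Fraction of eval cases the system handled correctly."""
    return sum(results) / len(results)


def gate_release(eval_results: list[bool]) -> None:
    score = reliability(eval_results)
    failures = len(eval_results) - sum(eval_results)
    print(f"reliability={score:.4%} ({failures} failures in {len(eval_results)} cases)")
    if score < TARGET_RELIABILITY:
        raise SystemExit("Below target: keep tuning prompts, retrieval, and models before shipping.")


# Illustrative arithmetic: a system that is right 95% of the time still fails
# about 50 of every 1,000 cases, roughly 50x more than the 99.9% target allows.
```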

AI can generate hundreds of statistically novel ideas in seconds, but they lack context and feasibility. The bottleneck isn't a lack of ideas, but a lack of *good* ideas. Humans excel at filtering this volume through the lens of experience and strategic value, steering raw output toward a genuinely useful solution.

A study found evaluators rated AI-generated research ideas as better than those from grad students. However, when the experiments were conducted, human ideas produced superior results. This highlights a bias where we may favor AI's articulate proposals over more substantively promising human intuition.