In an experiment testing AI-generated hypotheses for macular degeneration, the hypothesis that succeeded in lab tests was not the one ranked highest by ophthalmologists. This suggests expert intuition can be an unreliable predictor of success compared to systematic, AI-driven exploration and verification.
Generating truly novel and valid scientific hypotheses requires a specialized, multi-stage AI process. This involves using a reasoning model for idea generation, a literature-grounded model for validation, and a third system for checking originality against existing research. This layered approach overcomes the limitations of a single, general-purpose LLM.
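A minimal sketch of that three-stage pipeline, with every model call stubbed out; the function names and stub logic are illustrative, not a real API:

```python
def generate_hypotheses(topic, n=50):
    # Stage 1: a reasoning model proposes candidate hypotheses; stubbed here.
    return [f"hypothesis {i} about {topic}" for i in range(n)]

def is_grounded(hypothesis):
    # Stage 2: a literature-grounded (e.g., retrieval-augmented) model
    # checks the hypothesis against published evidence; stubbed here.
    return True

def is_original(hypothesis):
    # Stage 3: a separate system checks overlap with prior work,
    # e.g., via similarity search over a paper index; stubbed here.
    return True

def hypothesis_pipeline(topic):
    candidates = generate_hypotheses(topic)
    grounded = [h for h in candidates if is_grounded(h)]
    return [h for h in grounded if is_original(h)]

print(hypothesis_pipeline("macular degeneration")[:3])
```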
An attempt to teach AI 'scientific taste' using RLHF on hypotheses failed because human raters prioritized superficial qualities like tone and feasibility over a hypothesis's potential world-changing impact. This suggests a need for feedback tied to downstream outcomes, not just human preference.
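As a sketch of the distinction between the two feedback signals; the rater and experiment hooks are hypothetical placeholders, assuming an experimental outcome can eventually be observed:

```python
def preference_reward(hypothesis, rate):
    # What the failed RLHF run optimized: an immediate human score,
    # which in practice rewarded tone and apparent feasibility.
    return rate(hypothesis)

def outcome_reward(hypothesis, run_experiment):
    # What the passage argues for: a signal tied to the downstream result,
    # available only after a slow, expensive experiment completes.
    return 1.0 if run_experiment(hypothesis) else 0.0
```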
In high-stakes fields like pharma, AI's ability to generate more ideas (e.g., drug targets) is less valuable than its ability to aid in decision-making. Physical constraints on experimentation mean you can't test everything. The real need is for tools that help humans evaluate, prioritize, and gain conviction on a few key bets.
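One way to picture the constraint: given a hard cap on wet-lab slots, the tool's job reduces to ranking bets, not generating more of them. A toy sketch, where the conviction score is a hypothetical stand-in for whatever evidence, cost, and payoff estimates the team trusts:

```python
def prioritize(targets, conviction, budget):
    # Rank all candidate bets by a conviction score, then keep only as
    # many as the lab can physically test this cycle.
    return sorted(targets, key=conviction, reverse=True)[:budget]

# e.g., prioritize(candidate_targets, conviction=score_fn, budget=3)
```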
AI performs poorly in areas where expertise is based on unwritten 'taste' or intuition rather than documented knowledge. If the correct approach doesn't exist in the training data and isn't explicitly provided by human trainers, models will inevitably struggle with that class of problem.
In a direct comparison, a medicinal chemist was better than an AI model at evaluating the synthesizability of 30,000 compounds. The chemist's intuitive, "liability-spotting" approach highlights the continued value of expert human judgment and the need for human-in-the-loop AI systems.
AlphaFold's success in identifying a key protein for human fertilization (out of 2,000 possibilities) showcases AI's power. It acts as a hypothesis generator, dramatically reducing the search space for expensive and time-consuming real-world experiments.
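The screening pattern itself is simple. A hedged sketch, where predict_interaction_confidence is a hypothetical stand-in for a structure predictor's interface score (an AlphaFold-Multimer-style confidence) and is stubbed here:

```python
def predict_interaction_confidence(candidate, target):
    # Stub: a real run would fold the candidate-target pair and score
    # the predicted interface.
    return 0.5

def shortlist(candidates, target, k=10):
    # Rank all ~2,000 candidates in silico, send only the top k to the lab.
    scored = sorted(candidates,
                    key=lambda c: predict_interaction_confidence(c, target),
                    reverse=True)
    return scored[:k]
```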
The ultimate goal isn't just modeling specific systems (like protein folding), but automating the entire scientific method. This involves AI generating hypotheses, choosing experiments, analyzing results, and updating a 'world model' of a domain, creating a continuous loop of discovery.
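A minimal sketch of that loop, with every component reduced to a placeholder; the proposer, experiment runner, and world-model update are all stand-ins for far larger systems:

```python
import random

def propose(world_model):
    # An LLM would generate a hypothesis conditioned on current beliefs.
    return f"hypothesis-{len(world_model)}"

def design_and_run(hypothesis):
    # Choose an experiment and execute it; a coin flip stands in for
    # the real world here.
    return random.random() < 0.5

def update(world_model, hypothesis, outcome):
    # A real system would revise beliefs; this one just records outcomes.
    world_model[hypothesis] = outcome
    return world_model

def discovery_loop(steps=10):
    world_model = {}
    for _ in range(steps):
        h = propose(world_model)
        world_model = update(world_model, h, design_and_run(h))
    return world_model

print(discovery_loop())
```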
A study found evaluators rated AI-generated research ideas as better than those from grad students. However, when the experiments were conducted, human ideas produced superior results. This highlights a bias where we may favor AI's articulate proposals over more substantively promising human intuition.
Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (even a negative one), and refine. AI needs the same iterative contact with the real world to make genuine discoveries.
AI's key advantage isn't superior intelligence but the ability to enumerate a vast number of hypotheses by brute force and then rapidly filter them against the existing literature and data. This systematic, high-volume approach surfaces novel insights that intuition-driven human processes might miss.
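A sketch of that enumerate-then-filter pattern, using cosine similarity against a literature index as the novelty check; embed() is a stub standing in for any real sentence-embedding model:

```python
import hashlib
import numpy as np

def embed(text, dim=128):
    # Stub embedding: a deterministic pseudo-random unit vector per text,
    # in place of a real sentence-embedding model.
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def is_novel(hypothesis, literature_vectors, threshold=0.85):
    # Reject anything too close (by cosine similarity) to a known paper.
    h = embed(hypothesis)
    return max(v @ h for v in literature_vectors) < threshold

literature = [embed(a) for a in ["abstract of paper A", "abstract of paper B"]]
candidates = [f"candidate hypothesis {i}" for i in range(1000)]
survivors = [c for c in candidates if is_novel(c, literature)]
print(f"{len(survivors)} of {len(candidates)} pass the novelty filter")
```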