In high-stakes fields like pharma, AI's ability to generate more ideas (e.g., drug targets) is less valuable than its ability to aid in decision-making. Physical constraints on experimentation mean you can't test everything. The real need is for tools that help humans evaluate, prioritize, and gain conviction on a few key bets.

Related Insights

Wet lab experiments are slow and expensive, forcing scientists to pursue safer, incremental hypotheses. AI models can computationally test riskier, "home run" ideas before committing lab resources. This de-risking makes scientists less hesitant to explore breakthrough concepts that could accelerate the field.

In a direct comparison, a medicinal chemist was better than an AI model at evaluating the synthesizability of 30,000 compounds. The chemist's intuitive, "liability-spotting" approach highlights the continued value of expert human judgment and the need for human-in-the-loop AI systems.

Building an AI application is becoming trivial and fast ("under 10 minutes"). The true differentiator and the most difficult part is embedding deep domain knowledge into the prompts. The AI needs to be taught *what* to look for, which requires human expertise in that specific field.
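To make that concrete, here is a minimal, hypothetical sketch of the idea: the application code is trivial, and all of the value sits in the expert criteria written into the prompt. The names (`EXPERT_CRITERIA`, `triage_compound`, `call_llm`) and the specific chemistry criteria are illustrative assumptions, not taken from the source.

```python
# Hypothetical sketch: the "application" is a few lines; the hard part is the
# domain knowledge a medicinal chemist would encode in EXPERT_CRITERIA.

EXPERT_CRITERIA = """
When assessing a proposed compound, flag:
- reactive or unstable functional groups (e.g., acyl halides, epoxides)
- more than two stereocenters requiring asymmetric synthesis
- reliance on starting materials with no commercial supplier
- protecting-group sequences longer than three steps
"""

def build_triage_prompt(compound_description: str) -> str:
    """Wrap a compound description in the expert's screening criteria."""
    return (
        "You are assisting a medicinal chemist with synthesizability triage.\n"
        f"{EXPERT_CRITERIA}\n"
        f"Compound: {compound_description}\n"
        "List any liabilities you find, then give a 1-5 synthesizability score."
    )

def triage_compound(compound_description: str, call_llm) -> str:
    """call_llm is a stand-in for whatever model client is actually in use."""
    return call_llm(build_triage_prompt(compound_description))
```

The point of the sketch is that swapping in a different model changes one argument, while changing the field (say, from synthesizability to toxicology) means rewriting the criteria, which only a domain expert can do well.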

While AI can accelerate the ideation phase of drug discovery, the primary bottleneck remains the slow, expensive, and human-dependent clinical trial process. We are already "drowning in good ideas," so generating more with AI doesn't solve the fundamental constraint of testing them.

AI can produce scientific claims and codebases thousands of times faster than humans. However, the meticulous work of validating these outputs remains a human task. This growing gap between generation and verification could create a backlog of unproven ideas, slowing true scientific advancement.

AI can generate hundreds of statistically novel ideas in seconds, but they lack context and feasibility. The bottleneck isn't a lack of ideas, but a lack of *good* ideas. Humans excel at filtering this volume through the lens of experience and strategic value, steering raw output toward a genuinely useful solution.

AI will create jobs in unexpected places. As AI accelerates the discovery of new drugs and medical treatments, the bottleneck will shift to human-centric validation. This will lead to significant job growth in the biomedical sector, particularly in roles related to managing and conducting clinical trials.

The bottleneck for AI in drug development isn't the sophistication of the models but the absence of large-scale, high-quality biological data sets. Without comprehensive data on how drugs interact within complex human systems, even the best AI models cannot make accurate predictions.

Advanced AI tools like "deep research" models can produce vast amounts of information, like 30-page reports, in minutes. This creates a new productivity paradox: the AI's output capacity far exceeds a human's finite ability to verify sources, apply critical thought, and transform the raw output into authentic, usable insights.

Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (even if incorrect), and refine. AI needs this same iterative capability with the real world to make genuine discoveries.
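The loop the insight describes can be written down directly. The sketch below is only an illustration of that control flow under stated assumptions: every callable (`propose_hypothesis`, `run_experiment`, `analyze`, `refine`) is a hypothetical placeholder for a model call or a real-world experiment, not an existing API.

```python
# Minimal sketch of the hypothesize -> experiment -> analyze -> refine loop.
# All four callables are hypothetical placeholders supplied by the caller.

def scientific_loop(initial_observation, propose_hypothesis, run_experiment,
                    analyze, refine, max_rounds=5):
    """Iterate until a hypothesis is supported or the experiment budget runs out."""
    hypothesis = propose_hypothesis(initial_observation)
    for _ in range(max_rounds):
        result = run_experiment(hypothesis)    # slow, physical step: the real constraint
        verdict = analyze(hypothesis, result)  # even a negative result is informative
        if verdict.supported:
            return hypothesis, verdict
        hypothesis = refine(hypothesis, result)  # fold the failure back into the next guess
    return hypothesis, None  # budget exhausted without a supported hypothesis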
