Generating truly novel and valid scientific hypotheses requires a specialized, multi-stage AI process. This involves using a reasoning model for idea generation, a literature-grounded model for validation, and a third system for checking originality against existing research. This layered approach overcomes the limitations of a single, general-purpose LLM.
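A minimal sketch of what that layered pipeline could look like; the `generate`, `validate`, and `check_novelty` callables are hypothetical stand-ins for the three models (reasoning, literature-grounded, originality-checking), not real APIs:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Hypothesis:
    text: str
    grounded: bool = False
    novel: bool = False

def hypothesis_pipeline(
    prompt: str,
    generate: Callable[[str], list[str]],   # stage 1: reasoning model proposes candidates
    validate: Callable[[str], bool],        # stage 2: literature-grounded validity check
    check_novelty: Callable[[str], bool],   # stage 3: originality check against prior work
) -> list[Hypothesis]:
    """Run candidates through all three stages; keep only survivors."""
    survivors = []
    for candidate in generate(prompt):
        h = Hypothesis(candidate)
        h.grounded = validate(candidate)
        # Only spend the novelty check on candidates that are already valid.
        h.novel = h.grounded and check_novelty(candidate)
        if h.grounded and h.novel:
            survivors.append(h)
    return survivors
```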
LLMs shine when acting as a 'knowledge extruder'—shaping well-documented, 'in-distribution' concepts into specific code. They fail when the core task is novel problem-solving where deep thinking, not code generation, is the bottleneck. In these cases, the code is the easy part.
The significant leap in recent LLMs isn't just better text generation, but their ability to autonomously execute complex, sequential tasks. This 'agentic behavior' lets them handle multi-step processes like scientific validation workflows, where earlier models were limited to executing a single command at a time.
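Structurally, agentic behavior reduces to a loop: plan an action, execute it, and fold the observation back into context before planning the next step. A minimal sketch, where `plan`, `execute`, and `observe` are hypothetical placeholders rather than any particular framework's API:

```python
def run_agent(goal: str, plan, execute, observe, max_steps: int = 10) -> list:
    """Minimal agentic loop: the model plans a step, the step is executed,
    and the observed outcome is appended to context for the next decision."""
    context = [goal]
    for _ in range(max_steps):
        step = plan(context)       # model decides the next action given history
        if step == "DONE":         # model signals the task is complete
            break
        result = execute(step)     # run the action: tool call, script, query
        context.append(observe(step, result))  # feed the outcome back in
    return context
```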
Google is moving beyond AI as a mere analysis tool. The concept of an 'AI co-scientist' envisions AI as an active partner that helps sift through information, generate novel hypotheses, and outline ways to test them. This reframes the human-AI collaboration to fundamentally accelerate the scientific method itself.
Demis Hassabis argues against an LLM-only path to AGI, citing DeepMind's successes like AlphaGo and AlphaFold as evidence. He advocates for "hybrid systems" (neurosymbolic approaches) that combine neural networks with other techniques, such as search or evolutionary methods, to discover truly new knowledge rather than just remix existing data.
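A toy sketch of the "neural network plus evolutionary search" idea: a learned scorer ranks candidates as the fitness function, while mutation and selection push the search beyond what the model would generate on its own. The `mutate` and `score` callables are hypothetical stand-ins:

```python
import random

def evolve(seed_candidates: list, mutate, score, generations: int = 50,
           population: int = 32):
    """Evolutionary loop guided by a (stand-in) learned score function."""
    pool = list(seed_candidates)
    for _ in range(generations):
        pool.sort(key=score, reverse=True)   # neural model acts as fitness
        parents = pool[: population // 4]    # keep the top quarter as parents
        # Refill the population by mutating randomly chosen parents.
        pool = parents + [mutate(random.choice(parents))
                          for _ in range(population - len(parents))]
    return max(pool, key=score)
```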
In high-stakes fields like pharma, AI's ability to generate more ideas (e.g., drug targets) is less valuable than its ability to aid in decision-making. Physical constraints on experimentation mean you can't test everything. The real need is for tools that help humans evaluate, prioritize, and gain conviction on a few key bets.
AI can produce scientific claims and codebases thousands of times faster than humans. However, the meticulous work of validating these outputs remains a human task. This growing gap between generation and verification could create a backlog of unproven ideas, slowing true scientific advancement.
To make genuine scientific breakthroughs, an AI needs to learn the abstract reasoning strategies and mental models of expert scientists. This involves teaching it higher-level concepts, such as thinking in terms of symmetries, a core principle in physics that current models lack.
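To make the symmetry idea concrete, here is a small, self-contained check one could run on a candidate law: test whether a proposed function is invariant under rotation, a simplified stand-in for the kind of symmetry reasoning physicists apply. The function names are illustrative, not from any library:

```python
import math
import random

def rotate(point, theta):
    """Rotate a 2-D point about the origin by angle theta."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def respects_rotational_symmetry(f, trials: int = 1000) -> bool:
    """Empirically test whether f(x, y) is unchanged by rotations,
    i.e. whether it depends only on distance from the origin."""
    for _ in range(trials):
        p = (random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
        q = rotate(p, random.uniform(0.0, 2 * math.pi))
        if not math.isclose(f(*p), f(*q), rel_tol=1e-9):
            return False
    return True

# A central potential like 1/r is rotationally symmetric; x*y is not.
print(respects_rotational_symmetry(lambda x, y: 1.0 / math.hypot(x, y)))  # True
print(respects_rotational_symmetry(lambda x, y: x * y))                   # False
```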
AI can generate hundreds of statistically novel ideas in seconds, but those ideas often lack context and feasibility. The bottleneck isn't a lack of ideas, but a lack of *good* ideas. Humans excel at filtering this volume through the lens of experience and strategic value, steering raw output toward a genuinely useful solution.
Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (even a negative one), and refine. AI needs this same iterative capability with the real world to make genuine discoveries.
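The loop itself is simple to state; the hard part is grounding the experiment step in the real world. A hedged sketch, with all four callables as hypothetical placeholders:

```python
def discovery_loop(initial_hypothesis, run_experiment, analyze, refine,
                   max_iters: int = 20):
    """The hypothesize -> experiment -> analyze -> refine cycle.
    Negative results are not discarded: they drive the refinement."""
    hypothesis = initial_hypothesis
    history = []
    for _ in range(max_iters):
        data = run_experiment(hypothesis)    # interact with the world or a simulator
        verdict = analyze(hypothesis, data)  # 'supported', 'refuted', 'inconclusive'
        history.append((hypothesis, verdict))
        if verdict == "supported":
            break
        hypothesis = refine(hypothesis, data, history)  # learn from the failure
    return hypothesis, history
```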
The next leap in AI will come from integrating general-purpose reasoning models with specialized models for domains like biology or robotics. This fusion, creating a "single unified intelligence" across modalities, is the default scenario for achieving superintelligence.
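A rough sketch of one way such an integration could be wired, assuming a hypothetical generalist that can decompose a query and synthesize results, plus a dictionary of domain-specialist models; none of these names correspond to a real system:

```python
def unified_answer(query: str, decompose, specialists: dict, synthesize):
    """Fuse a general reasoner with domain specialists: the generalist splits
    the problem into domain-tagged sub-questions, each specialist answers its
    own, and the generalist recombines the findings into one answer."""
    subtasks = decompose(query)  # e.g. [("biology", "..."), ("robotics", "...")]
    findings = [(domain, specialists[domain](question))
                for domain, question in subtasks]
    return synthesize(query, findings)
```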