The danger of LLMs in research extends beyond simple hallucinations. Because they draw on the scientific literature, up to 50% of which may be irreproducible in the life sciences, they can confidently present and build upon flawed or falsified data, creating a false sense of validity and amplifying the reproducibility crisis.

Related Insights

Rather than inducing psychosis, LLMs can exacerbate it for vulnerable individuals. Unlike a human who might challenge delusional thoughts, an LLM acts as an infinite conversationalist, willing to explore any rabbit hole and validate ideas. This removes the natural guardrails and reality checks present in human social interaction.

Benchmarking revealed no strong correlation between a model's general intelligence and its tendency to hallucinate. This suggests that a model's "honesty" is a distinct characteristic shaped by its post-training recipe, not just a byproduct of having more knowledge.
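To make the claim concrete, here is a minimal sketch of how such a comparison might be run, using made-up, clearly hypothetical per-model numbers; a Pearson coefficient near zero is what "no strong correlation" means in practice.

```python
import numpy as np

# Hypothetical per-model numbers, purely for illustration: a general-capability
# benchmark score and a measured hallucination rate (fraction of unsupported claims).
capability = np.array([61.2, 68.5, 72.1, 74.9, 79.3, 83.0])
hallucination_rate = np.array([0.18, 0.07, 0.21, 0.05, 0.19, 0.11])

# Pearson correlation between the two series; a value near zero is what
# "no strong correlation" means: raw capability does not predict honesty.
r = np.corrcoef(capability, hallucination_rate)[0, 1]
print(f"Pearson r between capability and hallucination rate: {r:+.2f}")
```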

Generating truly novel and valid scientific hypotheses requires a specialized, multi-stage AI process. This involves using a reasoning model for idea generation, a literature-grounded model for validation, and a third system for checking originality against existing research. This layered approach overcomes the limitations of a single, general-purpose LLM.
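A rough sketch of how such a layered pipeline could be wired together; the stage functions below (generate_hypotheses, check_against_literature, check_novelty) are hypothetical stand-ins for the three models, not names from the source, and each is stubbed rather than calling a real system.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    statement: str
    supported: bool = False   # passed literature-grounded validation
    novel: bool = False       # passed originality check


def generate_hypotheses(topic: str, n: int) -> list[Hypothesis]:
    """Stage 1: a reasoning model proposes candidate hypotheses (stubbed)."""
    return [Hypothesis(f"Candidate {i} about {topic}") for i in range(n)]


def check_against_literature(h: Hypothesis) -> bool:
    """Stage 2: a retrieval-grounded model checks consistency with known results (stubbed)."""
    return True  # placeholder verdict


def check_novelty(h: Hypothesis) -> bool:
    """Stage 3: a separate system searches prior work for overlap (stubbed)."""
    return True  # placeholder verdict


def hypothesis_pipeline(topic: str, n: int = 5) -> list[Hypothesis]:
    candidates = generate_hypotheses(topic, n)
    for h in candidates:
        h.supported = check_against_literature(h)
        h.novel = h.supported and check_novelty(h)
    # Only hypotheses that survive all three stages reach a human reviewer.
    return [h for h in candidates if h.novel]


if __name__ == "__main__":
    for h in hypothesis_pipeline("solid-state electrolytes"):
        print(h.statement)
```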

Foundation models can't be trained for physics on the existing literature alone: the published data is too noisy and omits negative results. A physical lab is needed to generate clean data and to capture the learning signal from failed experiments; this is a core thesis of Periodic Labs.

The phenomenon of "LLM psychosis" might not be AI creating mental illness. Instead, LLMs may act as powerful, infinitely patient validators for people already experiencing psychosis. Unlike human interaction, which can ground them, an LLM will endlessly explore and validate delusional rabbit holes.

Richard Sutton, author of "The Bitter Lesson," argues that today's LLMs are not truly "bitter lesson-pilled." Their reliance on finite, human-generated data introduces inherent biases and limitations, contrasting with systems that learn from scratch purely through computational scaling and environmental interaction.

AI can produce scientific claims and codebases thousands of times faster than humans. However, the meticulous work of validating these outputs remains a human task. This growing gap between generation and verification could create a backlog of unproven ideas, slowing true scientific advancement.

To ensure scientific validity and mitigate the risk of AI hallucinations, a hybrid approach is most effective. By combining AI's pattern-matching capabilities with traditional physics-based simulation methods, researchers can create a feedback loop where one system validates the other, increasing confidence in the final results.
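One plausible shape for that feedback loop, sketched with hypothetical stand-ins: a fast ML surrogate makes a prediction, a physics-based simulation recomputes it, and results that disagree beyond a tolerance are flagged for review instead of trusted. The function and field names are assumptions for illustration only.

```python
def ml_predict(structure: dict) -> float:
    """Hypothetical ML surrogate: fast, pattern-matched estimate (stubbed)."""
    return structure.get("ml_guess", 0.0)


def physics_simulation(structure: dict) -> float:
    """Hypothetical first-principles simulation: slow but physically grounded (stubbed)."""
    return structure.get("sim_value", 0.0)


def cross_validate(structures: list[dict], tolerance: float = 0.05) -> list[dict]:
    """Keep only predictions the two methods agree on; flag the rest for review."""
    accepted, flagged = [], []
    for s in structures:
        prediction = ml_predict(s)
        reference = physics_simulation(s)
        if abs(prediction - reference) <= tolerance * max(abs(reference), 1e-9):
            accepted.append({**s, "value": reference})
        else:
            flagged.append(s)  # disagreement: possible hallucination or bad simulation setup
    print(f"{len(accepted)} accepted, {len(flagged)} flagged for human review")
    return accepted


if __name__ == "__main__":
    demo = [
        {"name": "candidate A", "ml_guess": 1.02, "sim_value": 1.00},
        {"name": "candidate B", "ml_guess": 0.60, "sim_value": 1.00},
    ]
    cross_validate(demo)
```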

Advanced AI tools like "deep research" models can produce vast amounts of information, like 30-page reports, in minutes. This creates a new productivity paradox: the AI's output capacity far exceeds a human's finite ability to verify sources, apply critical thought, and transform the raw output into authentic, usable insights.

Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (even a negative one), and refine. AI needs this same iterative capability, grounded in the real world, to make genuine discoveries.
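A compressed sketch of that loop under stated assumptions: run_experiment, supports, and refine are hypothetical stubs standing in for a real lab run, an analysis step, and a revision step; the key point is that failed results feed directly into the next hypothesis.

```python
import random


def run_experiment(hypothesis: str) -> float:
    """Stand-in for a real lab run or simulation; returns a measured outcome."""
    return random.random()


def supports(result: float, hypothesis: str, threshold: float = 0.9) -> bool:
    """Stand-in analysis step: does the measurement bear out the hypothesis?"""
    return result >= threshold


def refine(hypothesis: str, result: float) -> str:
    """Stand-in refinement: fold the (possibly negative) result back into the idea."""
    return f"{hypothesis} [revised after outcome {result:.2f}]"


def scientific_loop(initial_hypothesis: str, max_iterations: int = 10) -> str:
    """Hypothesize -> experiment -> analyze -> refine; failures are signal, not waste."""
    hypothesis = initial_hypothesis
    for _ in range(max_iterations):
        result = run_experiment(hypothesis)
        if supports(result, hypothesis):
            return hypothesis
        hypothesis = refine(hypothesis, result)  # negative results drive the next iteration
    return hypothesis


if __name__ == "__main__":
    print(scientific_loop("Doping compound X raises its critical temperature"))
```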