We scan new podcasts and send you the top 5 insights daily.
Information scientist Don Swanson showed that novel discoveries lie hidden in existing literature. If one paper shows A implies B and another shows B implies C, a new link (A implies C) can be inferred even though no single paper states it. AI can now scale this process of recombining old knowledge.
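Swanson's A-B-C pattern can be sketched in a few lines. This is a minimal illustration, assuming each paper has already been reduced to a direct (A, B) claim; the extraction step (NLP over abstracts) is out of scope, and the example claims are merely echoes of Swanson's famous fish-oil/Raynaud's case.

```python
def transitive_links(claims):
    """Given direct claims (a, b) meaning 'a implies b', return inferred
    (a, c) links via a shared intermediate b that no claim states directly."""
    direct = set(claims)
    inferred = set()
    for a, b in direct:
        for b2, c in direct:
            # A -> B and B -> C, with A -> C never stated outright
            if b == b2 and a != c and (a, c) not in direct:
                inferred.add((a, c))
    return inferred

claims = [
    ("fish oil", "lower blood viscosity"),       # paper 1: A -> B
    ("lower blood viscosity", "Raynaud relief"), # paper 2: B -> C
    ("magnesium", "reduced vascular spasm"),     # unrelated claim
]
print(transitive_links(claims))
```

The quadratic loop is fine for a sketch; at literature scale you would index claims by their intermediate term instead.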
AIs excel at exploring millions of problems at a surface level (breadth), a scale humans cannot match. Human experts provide the depth needed to tackle the difficult "islands" AIs identify. Science must shift from its current depth-focused model to one that first uses AI to map entire fields and clear away low-hanging fruit.
The true power of AI for knowledge work lies in formulating unique prompts derived from obscure or cross-disciplinary sources. This lets users extract novel ideas that standard queries miss, making deep, non-mainstream reading a key competitive advantage in the AI era.
Generating truly novel and valid scientific hypotheses requires a specialized, multi-stage AI process. This involves using a reasoning model for idea generation, a literature-grounded model for validation, and a third system for checking originality against existing research. This layered approach overcomes the limitations of a single, general-purpose LLM.
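The layered structure described above can be sketched as a simple pipeline. The three stage functions here are hypothetical stand-ins for the three separate systems (a reasoning model, a literature-grounded validator, a novelty checker); only the composition pattern is the point.

```python
def run_pipeline(prompt, generate, validate, check_novelty, n=20):
    """Generate n candidate hypotheses, then keep only those that
    survive both the grounding filter and the originality filter."""
    candidates = generate(prompt, n)                    # stage 1: reasoning model
    grounded = [h for h in candidates if validate(h)]   # stage 2: literature check
    return [h for h in grounded if check_novelty(h)]    # stage 3: novelty check

# Toy stand-ins, just to exercise the wiring:
generate = lambda prompt, n: [f"h{i}" for i in range(n)]
validate = lambda h: int(h[1:]) % 2 == 0   # pretend even-numbered ideas are grounded
check_novelty = lambda h: h != "h0"        # pretend h0 already exists in the literature
print(run_pipeline("example prompt", generate, validate, check_novelty, n=5))
```

The design point is that each filter is a separate, swappable component, which is exactly why a single general-purpose LLM struggles to do all three jobs at once.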
An AI tool can map citation or patent networks to find unexplored "blank spots" bordered by heavy research activity. These gaps represent high-potential opportunities for superstar papers or valuable patents, as any discovery there will connect and influence many adjacent fields.
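One minimal way to operationalize "blank spots bordered by heavy activity": in a citation or co-citation graph, look for pairs of topics with many shared neighbors but no direct edge between them. This sketch uses plain dictionaries rather than a real graph library, and the threshold is an illustrative assumption.

```python
from itertools import combinations

def blank_spots(edges, min_shared=2):
    """Return unlinked node pairs ranked by how many neighbors they share.
    High shared-neighbor counts with no direct edge mark a candidate gap."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    spots = []
    for u, v in combinations(sorted(nbrs), 2):
        if v not in nbrs[u]:                      # no one has linked u and v yet
            shared = len(nbrs[u] & nbrs[v])       # but their neighborhoods overlap
            if shared >= min_shared:
                spots.append((u, v, shared))
    return sorted(spots, key=lambda t: -t[2])

# Two busy topics A and B both cite X and Y, yet never cite each other:
edges = [("A", "X"), ("A", "Y"), ("B", "X"), ("B", "Y")]
print(blank_spots(edges))
```

Shared-neighbor counting is the simplest link-prediction signal; real tools would weight it by citation volume and recency so that only gaps bordered by genuinely heavy activity surface.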
Early AI models advanced by scraping web text and code. The next revolution, especially in "AI for science," requires overcoming a major hurdle: consolidating and formatting the world's vast but fragmented scientific data across disciplines like chemistry and materials science for model training.
The most effective way to use AI is not for initial research but for synthesis. After you've gathered and vetted high-quality sources, feed them to an AI to identify common themes, find gaps, and pinpoint outliers. This dramatically speeds up analysis without sacrificing quality.
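In practice, "feed them to an AI" reduces to assembling one well-structured synthesis prompt from your vetted sources. The instruction wording and delimiters below are illustrative assumptions, not a fixed format; the sketch only shows the assembly step, not the model call.

```python
def synthesis_prompt(sources):
    """Build a single synthesis prompt from hand-vetted (title, text) pairs.
    The AI is asked for themes, gaps, and outliers -- not for new research."""
    parts = [
        "Across the sources below, identify (1) common themes, "
        "(2) gaps no source addresses, and (3) outlier claims that "
        "contradict the rest.\n"
    ]
    for i, (title, text) in enumerate(sources, 1):
        parts.append(f"--- Source {i}: {title} ---\n{text}\n")
    return "\n".join(parts)

sources = [
    ("Review of method X", "Summary text of the first vetted source..."),
    ("Field report on X", "Summary text of the second vetted source..."),
]
print(synthesis_prompt(sources))
```

Keeping source selection manual and only automating the cross-source comparison is what preserves quality while speeding up the analysis.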
A key value of AI agents is rediscovering "lost" institutional knowledge. By analyzing historical experimental data, agents can prevent redundant work. For example, one agent surfaced a previous mouse-model study from an acquired company whose original scientists had long since left, saving the company eight months of work and significant cost.
The ultimate goal isn't just modeling specific systems (like protein folding), but automating the entire scientific method. This involves AI generating hypotheses, choosing experiments, analyzing results, and updating a "world model" of a domain, creating a continuous loop of discovery.
Cohere's CEO believes if Google had hidden the Transformer paper, another team would have created it within 18 months. Key ideas were already circulating in the research community, making the discovery a matter of synthesis whose time had come, rather than a singular stroke of genius.
AI's key advantage isn't superior intelligence but the ability to brute-force enumerate and then rapidly filter a vast number of hypotheses against existing literature and data. This systematic, high-volume approach uncovers novel insights that intuition-driven human processes might miss.
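The enumerate-then-filter loop can be made concrete with a toy sketch. The entities, properties, and filters below are illustrative assumptions; the point is the shape of the computation — exhaustive generation followed by cheap rejection against what is already known.

```python
from itertools import product

def enumerate_hypotheses(entities, properties, already_known, plausible):
    """Brute-force every (entity, property) pairing, drop pairs already
    in the literature, then apply a cheap plausibility filter."""
    candidates = product(entities, properties)           # exhaustive breadth
    novel = (h for h in candidates if h not in already_known)
    return [h for h in novel if plausible(h)]            # rapid filtering

# Toy run: two materials, one property, one pairing already published.
entities = ["Mg alloy", "Zn alloy"]
properties = ["high conductivity"]
already_known = {("Mg alloy", "high conductivity")}
plausible = lambda h: True   # stand-in for a data- or model-based check
print(enumerate_hypotheses(entities, properties, already_known, plausible))
```

Humans rarely enumerate a full cross-product before filtering; a machine does it trivially, which is exactly the systematic advantage the insight describes.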