To make genuine scientific breakthroughs, an AI needs to learn the abstract reasoning strategies and mental models of expert scientists. This involves teaching it higher-level concepts, such as thinking in terms of symmetries, a core principle in physics that current models lack.
Current LLMs fail at science because they lack the ability to iterate. True scientific inquiry is a loop: form a hypothesis, conduct an experiment, analyze the result (including negative results), and refine. AI needs this same iterative capability with the real world to make genuine discoveries.
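The loop described above can be sketched in a few lines. This is a conceptual illustration, not Periodic Labs' actual system; all names (`scientific_loop`, `run_experiment`, `refine`) are hypothetical, and the "experiment" is a stand-in for a real-world measurement.

```python
def scientific_loop(initial_hypothesis, run_experiment, refine,
                    max_iters=10, tol=0.05):
    """Hypothesize -> experiment -> analyze -> refine, until the
    hypothesis's prediction matches observation within `tol`."""
    hypothesis = initial_hypothesis
    history = []  # keep every trial, including failures: that is the signal
    for _ in range(max_iters):
        predicted = hypothesis["prediction"]
        observed = run_experiment(hypothesis)  # real-world measurement
        error = abs(predicted - observed)
        history.append((hypothesis, observed, error))
        if error <= tol:
            return hypothesis, history
        hypothesis = refine(hypothesis, observed)  # learn from the miss
    return hypothesis, history
```

The point of the sketch is the `history` list: every iteration, successful or not, is retained as training signal, which is exactly what published literature (with its missing negative results) cannot provide.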
The ambitious goal of discovering a high-temperature superconductor isn't just a scientific target; it's a strategic choice. Achieving it requires building numerous sub-systems, such as autonomous synthesis and characterization, effectively forcing the creation of a general-purpose platform for AI-driven science.
Instead of relying on digital proxies like code graders, Periodic Labs uses real-world lab experiments as the ultimate reward function. Nature itself becomes the reinforcement learning environment, ensuring the AI is optimized against physical reality, not flawed simulations.
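To make the contrast with digital proxies concrete, here is a minimal sketch of a reward function graded by a physical measurement rather than a simulator. Everything here is an assumed illustration: `measure_critical_tc` stands in for a real synthesize-and-characterize step, and the target value is arbitrary.

```python
def lab_reward(candidate_material, measure_critical_tc, target_tc=300.0):
    """Score a candidate by how close its *measured* critical temperature
    (in kelvin) is to the target. The policy is graded by physical
    reality, not by a simulation that may be subtly wrong."""
    measured_tc = measure_critical_tc(candidate_material)  # real experiment
    return -abs(measured_tc - target_tc)  # closer to zero is better
```

Replacing `measure_critical_tc` with a DFT simulation or a learned surrogate would reintroduce exactly the flawed-proxy problem the paragraph above describes; the thesis is that the expensive real measurement is the reward.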
While pursuing a long-term research goal, the company's commercial strategy is to build AI co-pilots and intelligence layers for R&D workflows in established industries like space and defense. This approach productizes intermediate progress and targets massive existing R&D budgets.
Foundation models can't be trained for physics using existing literature because the data is too noisy and lacks published negative results. A physical lab is needed to generate clean data and capture the learning signal from failed experiments, which is a core thesis for Periodic Labs.
In a field as complex as AI for science, even top experts know only a fraction of what's needed. Periodic Labs prioritizes intense curiosity and mission alignment over advanced degrees, recognizing that everyone, regardless of background, faces a steep learning curve to grasp the full picture.
Simply scaling models on internet data won't solve specialized problems like curing cancer or discovering materials. While scaling laws hold for in-domain tasks, the model must be optimized against the specific data distribution it needs to learn from, which for science means generating new experimental data.
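For reference, the in-domain claim has a standard empirical form (following Kaplan et al.-style scaling laws; the symbols below are the usual ones, not anything specific to Periodic Labs):

```latex
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```

Here $L$ is test loss, $D$ is dataset size, and $D_c$, $\alpha_D$ are fitted constants. Crucially, this power law is measured on held-out data drawn from the *training* distribution; it makes no promise about distributions absent from the corpus, such as outcomes of unpublished or never-run experiments. Growing $D$ with more internet text therefore doesn't help; the relevant distribution must be sampled directly in a lab.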
