Periodic Labs' co-founder states that their work would not have been possible with the AI of late 2022. Advances in model reasoning, reliable tool use, and error correction over the intervening years supplied the foundational capabilities needed to connect AI systems to the physical world.
While productionizing GPT-4, OpenAI weighed specific applications such as writing or coding assistants. The now-famous chatbot was chosen not because it was the most obvious idea, but because leadership took an opinionated stance: keep the product general purpose.
Progress towards AGI is not a smooth climb. Models exhibit "spikiness": they can perform at a world-class level in one narrow domain yet degrade to the level of a "bad high school student" under slight perturbations. This unintuitive generalization makes their capabilities uneven and unpredictable.
Many physicists moved into AI seeking a new frontier after the Higgs boson's discovery. Particle physics had become bottlenecked by the need for new, expensive apparatus, and the high leverage of AI and computer science offered an attractive alternative for career impact.
Unlike language models trained on internet text, AI for materials science overcomes data scarcity and unreliability (e.g., conflicting results in the literature) with a closed loop: the system actively directs experiments, analyzes the grounded results for patterns, and uses that new data to drive the next cycle.
Periodic Labs doesn't use a single monolithic model. Instead, a powerful language model acts as a central coordinator, or "copilot," directing experiments by calling smaller, highly specialized, more efficient neural nets (e.g., symmetry-aware models for atomic systems) as tools.
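A minimal sketch of that coordinator-plus-tools pattern, with every name invented: a keyword rule stands in for the LLM's tool-selection step, and plain functions stand in for the specialized nets. It shows only the dispatch structure, not Periodic Labs' actual models.

```python
# Hypothetical coordinator dispatching to specialized "tools".
# In a real system the coordinator would be an LLM using a tool-calling
# API; here a keyword rule stands in for that decision.

def symmetry_aware_energy(structure: list[tuple[str, float]]) -> float:
    # Stand-in for an equivariant neural net scoring an atomic structure.
    return -sum(abs(z) for _, z in structure)

def literature_lookup(formula: str) -> str:
    # Stand-in for a retrieval tool over experimental records.
    return f"no reliable prior data for {formula}"

TOOLS = {
    "energy": symmetry_aware_energy,
    "lookup": literature_lookup,
}

def coordinator(task: str):
    # Route the task to the cheapest specialized model that can handle it.
    if "energy" in task:
        return TOOLS["energy"]([("Fe", 0.0), ("O", 1.2)])
    return TOOLS["lookup"]("Fe2O3")
```

The design point is the division of labor: the general model handles open-ended task interpretation, while narrow, physics-informed models do the numerically heavy work efficiently.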
The path to AI self-improvement isn't uniform. It is happening first in software engineering and AI research because these fields have cheap, fast, and verifiable feedback (e.g., unit tests). This capability won't automatically transfer to domains like biology until similar closed-loop systems are built.
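The software-engineering advantage can be made concrete: a candidate program is scored by running tests, and the pass/fail bit is an instant, unambiguous signal. This toy example (all names invented) shows why that feedback is so cheap compared to a wet-lab experiment.

```python
# Toy illustration of verifiable feedback in software: a candidate
# function is "verified" by unit tests, yielding a binary reward.

def candidate_add(a: int, b: int) -> int:
    # A generated candidate solution to be checked.
    return a + b

def verify(fn) -> bool:
    # Unit tests give a fast, unambiguous training signal; domains like
    # biology lack an equally cheap equivalent until closed-loop labs exist.
    cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]
    return all(fn(*args) == want for args, want in cases)

reward = 1.0 if verify(candidate_add) else 0.0
```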
