Demis Hassabis notes that while generative AI models can create visually realistic worlds, their underlying physics are only approximations: they look correct at a casual glance but fail under rigorous testing. Closing this gap between plausible and accurate physics is a key challenge that must be solved before these models can be reliably used for robotics training.

Related Insights

While language models understand the world through text, Demis Hassabis argues they lack an intuitive grasp of physics and spatial dynamics. He sees 'world models'—simulations that understand cause and effect in the physical world—as the critical technology needed to advance AI from digital tasks to effective robotics.

Sora 2's most significant advance is not its visual quality but its ability to understand and simulate physics. The model accurately portrays how water splashes or how vehicles kick up snow, demonstrating a grasp of cause and effect crucial to building a true world model.

Startups and major labs are converging on "world models," which simulate physical reality and its cause-and-effect dynamics. These are seen as the necessary step beyond text-based LLMs for building agents that genuinely understand and interact with the physical world, and a milestone on the path to AGI.

GI discovered that its world model, trained on game footage, could generate realistic camera shake during an in-game explosion, a physical effect that was never part of the game's engine. This suggests such models learn an implicit understanding of real-world physics and can generate plausible phenomena beyond their source material.

Instead of replacing entire systems with AI "world models," a superior approach is a hybrid model. Classical code should handle deterministic logic (like game physics), while AI provides a "differentiable" emergent layer for aesthetics and creativity (like real-time texturing). This leverages the unique strengths of both computational paradigms.
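A minimal sketch of what this split can look like in practice, assuming a toy 2D projectile and PyTorch; `physics_step` and `aesthetic` are illustrative names, not from any shipping engine:

```python
import torch

def physics_step(pos, vel, dt=1.0 / 60.0, gravity=-9.81):
    """Classical, deterministic logic: a fixed-timestep Euler integrator."""
    vel = vel + torch.tensor([0.0, gravity]) * dt
    pos = pos + vel * dt
    return pos, vel

# Differentiable emergent layer: a tiny MLP that maps physical state
# to an RGB tint, trainable end-to-end toward aesthetic goals.
aesthetic = torch.nn.Sequential(
    torch.nn.Linear(4, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 3),
    torch.nn.Sigmoid(),
)

pos = torch.tensor([0.0, 10.0])   # start 10 m up
vel = torch.tensor([3.0, 0.0])    # moving sideways
for _ in range(60):               # one simulated second
    pos, vel = physics_step(pos, vel)       # exact, hand-coded physics
    rgb = aesthetic(torch.cat([pos, vel]))  # learned, differentiable look
```

The physics never drifts because it is ordinary code, while gradients can still flow through `aesthetic`, so the visual layer can be trained against any differentiable loss without touching the deterministic core.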

While a world model can generate a physically plausible arch, it doesn't understand the underlying physics of force distribution. This gap between pattern matching and causal reasoning is a fundamental split between AI and human intelligence, making current models unsuitable for mission-critical applications like architecture.

Instead of simulating photorealistic worlds, robotics firm Flexion trains its models on simplified, abstract representations. For example, it uses perception models like Segment Anything to 'paint' a door red and its handle green. By training on this simplified abstraction, the robot learns the core task (opening doors) in a way that generalizes across all real-world doors, bypassing the need for perfect simulation.
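A minimal sketch of that abstraction step, assuming the door and handle masks have already been produced by a perception model such as Segment Anything; `abstract_observation` is a hypothetical helper for illustration, not Flexion's published code:

```python
import numpy as np

def abstract_observation(rgb, door_mask, handle_mask):
    """Replace photorealistic pixels with a task-level abstraction:
    door -> red, handle -> green, everything else -> neutral gray."""
    canvas = np.full_like(rgb, 128)      # flatten all distracting texture
    canvas[door_mask] = (255, 0, 0)      # 'paint' the door red
    canvas[handle_mask] = (0, 255, 0)    # 'paint' the handle green
    return canvas

# Toy usage with placeholder masks (in practice these come from the
# segmentation model, one boolean HxW array per object).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
door = np.zeros((480, 640), dtype=bool); door[100:400, 200:400] = True
handle = np.zeros((480, 640), dtype=bool); handle[240:260, 360:380] = True
policy_input = abstract_observation(frame, door, handle)
```

Because every door collapses to the same red-shape-with-green-handle observation, a policy trained on one door transfers to doors the training pipeline never rendered.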

A key advance in Sora 2 is how it handles failure. When a generated agent fails at a task (e.g., a basketball player missing a shot), the model renders a physically plausible outcome (the ball bouncing off the rim) rather than forcing an unrealistic success. This points to a deeper, more robust internal world model.

Current AI world models suffer from compounding errors in long-term planning, where small inaccuracies become catastrophic over many steps. Demis Hassabis suggests hierarchical planning—operating at different levels of temporal abstraction—is a promising solution to mitigate this issue by reducing the number of sequential steps.
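A toy illustration of why cutting the number of sequential predictions helps, using a simple additive-error model; real world-model errors compound nonlinearly, but the step-count effect is the same in spirit, and all numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01        # assumed error injected by each model prediction
horizon = 256     # fine-grained steps to plan over
chunk = 16        # fine steps covered by one coarse, abstract prediction

def rollout_error(n_predictions, trials=10_000):
    """Mean accumulated drift after n sequential noisy predictions."""
    drift = rng.normal(0.0, eps, size=(trials, n_predictions)).sum(axis=1)
    return np.abs(drift).mean()

flat = rollout_error(horizon)            # predict all 256 steps one by one
coarse = rollout_error(horizon // chunk) # 16 predictions at a higher level
print(f"flat: {flat:.4f}  hierarchical: {coarse:.4f}")  # ~4x less drift
```

Because independent drift grows roughly with the square root of the number of predictions, sixteen coarse steps accumulate about a quarter of the error of 256 fine-grained ones.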

The "bitter lesson" (scale and simple models win) works for language because training data (text) aligns with the output (text). Robotics faces a critical misalignment: it's trained on passive web videos but needs to output physical actions in a 3D world. This data gap is a fundamental hurdle that pure scaling cannot solve.