
Prof. Cho outlines two competing visions for world models. One camp believes in high-fidelity, step-by-step prediction (e.g., video generation). The other, which he and Yann LeCun favor, argues for abstract, high-level latent models that can plan without simulating every detail, akin to human thinking.

Related Insights

While language models understand the world through text, Demis Hassabis argues they lack an intuitive grasp of physics and spatial dynamics. He sees 'world models'—simulations that understand cause and effect in the physical world—as the critical technology needed to advance AI from digital tasks to effective robotics.

Human understanding is the ability to connect new information to a global, unified model of the universe. Until recently, AI models were isolated (e.g., a chess model). The major advance with large multimodal models is their ability to create a single, cohesive reality model, enabling true, generalizable understanding.

Google's Project Genie, which generates interactive virtual worlds from prompts, is not just a gaming or media tool. It's a foundational part of Google DeepMind's strategy to achieve AGI by creating simulated environments where AI can learn about physics, actions, and consequences.

Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling'—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.

Startups and major labs are focusing on "world models," which simulate physical reality, cause, and effect. These are seen as the necessary step beyond text-based LLMs: agents that can truly understand and interact with the physical world, and a key milestone on the path to AGI.

Large language models are insufficient for tasks requiring real-world interaction and spatial understanding, like robotics or disaster response. World models supply this missing piece by generating interactive 3D environments that can be reasoned about. They represent a foundational shift from language-based AI to a more holistic, spatially intelligent AI.

To create persistent and interactive AI-generated worlds, Moon Lake uses a hybrid approach. It encodes deterministic rules and interactivity using symbolic representations like code, while leveraging pixel-based models only for the world's visual appearance. This allows for long-horizon memory and complex game mechanics that pixel-only models struggle with.

Wayve's ability to handle novel driving situations isn't just an emergent property of scale. The company actively trains "world models," internal generative simulators that let the AI reason about what might happen next, producing sophisticated behaviors like nudging into intersections or slowing in fog.

Meta's chief AI scientist, Yann LeCun, is reportedly leaving to start a company focused on "world models"—AI that learns from video and spatial data to understand cause-and-effect. He argues the industry's focus on LLMs is a dead end and that his alternative approach will become dominant within five years.

Demis Hassabis sees video generation as more than a content tool; it's a step toward building AI with "world models." By learning to generate realistic scenes, these models develop an intuitive understanding of physics and causality, a foundational capability for AGI to perform long-term planning in the real world.