Cuban believes today's LLMs, trained on text and images, are a limited step. The next leap will be "worldview" models trained on the fundamental physics of the real world, using data from video and sensors to understand cause and effect, not just language patterns.
The next major leap in AI may come from "world models," which aim to give LLMs an experiential, physical understanding of concepts like space and physics. This mirrors the difference between knowing facts from a book and having real-world experience.
While language models understand the world through text, Demis Hassabis argues they lack an intuitive grasp of physics and spatial dynamics. He sees "world models"—simulations that understand cause and effect in the physical world—as the critical technology needed to advance AI from digital tasks to effective robotics.
Human understanding is the ability to connect new information to a global, unified model of the universe. Until recently, AI models were isolated (e.g., a chess model). The major advance with large multimodal models is their ability to create a single, cohesive reality model, enabling true, generalizable understanding.
Language is just one "keyhole" into intelligence. True artificial general intelligence (AGI) requires "world modeling"—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.
Startups and major labs are focusing on "world models," which simulate physical reality and its cause-and-effect dynamics. This is seen as the necessary step beyond text-based LLMs: it would create agents that can truly understand and interact with the physical world, moving the field closer to AGI.
Large language models are limited because they lack an understanding of the physical world. The next evolution is "world models"—AI trained on real-world sensory data to understand physics, space, and context. This is the foundational technology required to unlock physical AI like advanced robotics.
Meta's chief AI scientist, Yann LeCun, is reportedly leaving to start a company focused on "world models"—AI that learns from video and spatial data to understand cause-and-effect. He argues the industry's focus on LLMs is a dead end and that his alternative approach will become dominant within five years.
Demis Hassabis sees video generation as more than a content tool; it's a step toward building AI with "world models." By learning to generate realistic scenes, these models develop an intuitive understanding of physics and causality, a foundational capability for AGI to perform long-term planning in the real world.
Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason about, and interact with a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus of the next wave of AI models.
The host predicts superintelligence won't just be a better reasoner but will come from merging latent spaces of different data types (text, vision, physics). This will give AI an intuitive, non-verbal "feel" for complex domains, much like a human knows where their arm is without calculation.