
The current focus on LLMs is a temporary phase. The true leap towards AGI will come from multi-sensory models that can process and integrate visual, auditory, and other data streams simultaneously, much like a human does. This moves AI from text generation to real-world understanding.

Related Insights

The next major leap in AI may come from "world models," which aim to give LLMs an experiential, physical understanding of concepts like space and physics. This mirrors the difference between knowing facts from a book and having real-world experience.

Human understanding is the ability to connect new information to a global, unified model of the universe. Until recently, AI models were isolated (e.g., a chess model). The major advance with large multimodal models is their ability to create a single, cohesive reality model, enabling true, generalizable understanding.

Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling'—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.

Startups and major labs are focusing on "world models," which simulate physical reality, cause, and effect. This is seen as the necessary step beyond text-based LLMs to create agents that can truly understand and interact with the physical world, a key step towards AGI.

Large language models are insufficient for tasks requiring real-world interaction and spatial understanding, like robotics or disaster response. World models provide this missing piece by generating interactive 3D environments that can be reasoned about. They represent a foundational shift from language-based AI to a more holistic, spatially intelligent AI.

While today's focus is on text-based LLMs, the true, defensible AI battleground will be in complex modalities like video. Generating video requires multiple interacting models and unique architectures, creating far greater potential for differentiation and a wider competitive moat than text-based interfaces, which will become commoditized.

Large Language Models are limited because they lack an understanding of the physical world. The next evolution is 'World Models'—AI trained on real-world sensory data to understand physics, space, and context. This is the foundational technology required to unlock physical AI like advanced robotics.

While language models are becoming incrementally better at conversation, the next significant leap in AI is defined by multimodal understanding and the ability to perform tasks, such as navigating websites. This shift from conversational prowess to agentic action marks the new frontier for a true "step change" in AI capabilities.

Cuban believes today's LLMs, trained on text and images, are a limited step. The next leap will be "worldview" models trained on the fundamental physics of the real world, using data from video and sensors to understand cause and effect rather than just language patterns.

Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.

The Next AI Wave Isn't Language Models, It's Multi-Sensory World Models | RiffOn