Current multimodal models shoehorn visual data into a 1D text-based sequence. True spatial intelligence is different. It requires a native 3D/4D representation to understand a world governed by physics, not just human-generated language. This is a foundational architectural shift, not an extension of LLMs.
Historically, computer vision treated 3D reconstruction (capturing reality) and generation (creating content) as separate fields. Techniques like NeRFs merge them into a single learned representation of a scene, letting models move seamlessly between perceiving and imagining 3D spaces, a major paradigm shift; a sketch of the shared rendering machinery follows.
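To make the unification concrete, here is a toy sketch of the volume-rendering quadrature that NeRF-style models rely on: the same renderer works whether the underlying field was fit to photographs (reconstruction) or sampled from a generative model (generation). The `field` function, sample counts, and ray parameters below are illustrative assumptions, not any particular trained model.

```python
# Toy volume rendering along one ray (assuming NumPy). A radiance field maps
# 3D points to (density, color); the renderer blends samples by transmittance.
import numpy as np

def field(points):
    """Stand-in radiance field: returns (density, rgb) per 3D point."""
    density = np.exp(-np.linalg.norm(points, axis=-1))  # denser near the origin
    rgb = 0.5 + 0.5 * np.tanh(points)                   # arbitrary colors in [0, 1]
    return density, rgb

def render_ray(origin, direction, t_near=0.0, t_far=4.0, n_samples=64):
    """Composite C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.diff(t, append=t_far)                    # spacing between samples
    points = origin + t[:, None] * direction
    sigma, color = field(points)
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)       # composited RGB

print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))
```

Swap the toy `field` for a network fit to photos and you reconstruct; swap it for a generative sampler and you imagine — the rendering step is identical.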
While LLMs dominate headlines, Dr. Fei-Fei Li argues that "spatial intelligence"—the ability to understand and interact with the 3D world—is the critical, underappreciated next step for AI. This capability is the linchpin for unlocking meaningful advances in robotics, design, and manufacturing.
Large language models are insufficient for tasks requiring real-world interaction and spatial understanding, like robotics or disaster response. World models provide this missing piece by generating interactive 3D environments that an agent can reason about and act in. They represent a foundational shift from language-based AI to a more holistic, spatially intelligent AI.
A common misconception is that Transformers are sequential models like RNNs. Fundamentally, they are permutation-equivariant and operate on sets of tokens; sequence information is injected only through positional embeddings, which makes the architecture inherently flexible for non-sequential data like 3D scenes or graphs.
The core transformer architecture is permutation-equivariant and operates on sets of tokens, not ordered sequences. Sequentiality is an add-on supplied by positional embeddings, making transformers naturally suited to non-sequential structures like 3D worlds, a point many practitioners overlook (see the sketch below).
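A minimal sketch, assuming PyTorch, of the set-like behavior described above: with no positional embeddings, permuting the input tokens simply permutes the attention output, so the layer treats its input as an unordered set. The layer sizes and token counts are arbitrary choices for illustration.

```python
# Self-attention without positional embeddings is permutation-equivariant:
# shuffling the tokens shuffles the output the same way, nothing else changes.
import torch

torch.manual_seed(0)
attn = torch.nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)

x = torch.randn(1, 8, 16)          # 8 tokens, no position information attached
perm = torch.randperm(8)

out, _ = attn(x, x, x)                                   # original order
out_p, _ = attn(x[:, perm], x[:, perm], x[:, perm])      # permuted tokens

# Equivariance: the output of the permuted input equals the permuted output.
print(torch.allclose(out[:, perm], out_p, atol=1e-6))    # True
```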
World Labs argues that AI focused on language misses the fundamental "spatial intelligence" humans use to interact with the 3D world. This capability, which evolved over hundreds of millions of years, is crucial for true understanding and cannot be fully captured by 1D text, a lossy representation of physical reality.
World Labs co-founder Fei-Fei Li posits that spatial intelligence—the ability to reason and interact in 3D space—is a distinct and complementary form of intelligence to language. This capability is essential for tasks like robotic manipulation and scientific discovery that cannot be reduced to linguistic descriptions.
AR and robotics are bottlenecked by software's inability to truly understand the 3D world. Spatial intelligence is positioned as the fundamental operating system that connects a device's digital "brain" to physical reality. This layer is crucial for enabling meaningful interaction and for helping these hardware platforms mature.
Contrary to common perception shaped by their use in language, Transformers are not inherently sequential. Their core architecture operates on sets of tokens, with sequence information only injected via positional embeddings. This makes them powerful for non-sequential data like 3D objects or other unordered collections.
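Complementing the sketch above, this minimal example (again assuming PyTorch, with arbitrary sizes) shows the flip side: once positional embeddings are added, the same attention layer is no longer permutation-equivariant, which is precisely how sequence order gets injected when a task needs it.

```python
# Adding positional embeddings breaks permutation equivariance: tokens that
# swap places now land in different position slots, so order starts to matter.
import torch

torch.manual_seed(0)
attn = torch.nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)

x = torch.randn(1, 8, 16)
pos = torch.randn(1, 8, 16)        # stand-in for learned/sinusoidal positions
perm = torch.randperm(8)

xp = x + pos                       # positions fused into the tokens
out, _ = attn(xp, xp, xp)
xq = x[:, perm] + pos              # same tokens, new order, same position slots
out_p, _ = attn(xq, xq, xq)

print(torch.allclose(out[:, perm], out_p, atol=1e-6))    # False: order matters now
```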
Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.