Contrary to the common perception shaped by their use in language, Transformers are not inherently sequential. Their core architecture operates on sets of tokens; sequence information is injected only via positional embeddings. This makes them powerful for non-sequential data such as 3D objects or other unordered collections.
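This claim is easy to verify directly. The sketch below implements a single self-attention head in plain NumPy (the weight matrices and token values are random placeholders, not a trained model) and checks that permuting the input tokens permutes the output identically — self-attention is permutation-equivariant — while adding slot-indexed positional embeddings breaks that symmetry:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (arbitrary for the demo)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (n_tokens, d). Note: nothing here depends on token order.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))
    return A @ V

X = rng.standard_normal((5, d))   # 5 tokens, treated as a set
perm = rng.permutation(5)

# Permutation equivariance: f(P·X) == P·f(X)
assert np.allclose(self_attention(X[perm]), self_attention(X)[perm])

# Positional embeddings are tied to slots, not tokens, so adding them
# reintroduces order sensitivity:
pos = rng.standard_normal((5, d))
assert not np.allclose(self_attention(X[perm] + pos),
                       self_attention(X + pos)[perm])
```

The same argument carries through multi-head attention and the per-token MLP, since both are applied identically at every position.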

Related Insights

While more data and compute yield incremental improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like the Transformer. Such breakthroughs are the hardest to produce deliberately, and they represent the highest-leverage, yet riskiest, area for investment and research focus.

While LLMs dominate headlines, Dr. Fei-Fei Li argues that "spatial intelligence"—the ability to understand and interact with the 3D world—is the critical, underappreciated next step for AI. This capability is the linchpin for unlocking meaningful advances in robotics, design, and manufacturing.

Large language models are insufficient for tasks requiring real-world interaction and spatial understanding, like robotics or disaster response. World models provide this missing piece by generating interactive 3D environments that can be explored and reasoned about. They represent a foundational shift from language-based AI to a more holistic, spatially intelligent AI.

A common misconception is that Transformers are sequential models like RNNs. Fundamentally, they are permutation-equivariant and operate on sets of tokens. Sequence information is artificially injected via positional embeddings, making the architecture inherently flexible for non-linear data like 3D scenes or graphs.

The core transformer architecture is permutation-equivariant and operates on sets of tokens, not ordered sequences. Sequentiality is an add-on via positional embeddings, making transformers naturally suited for non-linear data structures like 3D worlds, a concept many practitioners overlook.

World Labs argues that AI focused on language misses the fundamental "spatial intelligence" humans use to interact with the 3D world. This capability, which evolved over hundreds of millions of years, is crucial for true understanding and cannot be fully captured by 1D text, a lossy representation of physical reality.

World Labs co-founder Fei-Fei Li posits that spatial intelligence—the ability to reason and interact in 3D space—is a distinct and complementary form of intelligence to language. This capability is essential for tasks like robotic manipulation and scientific discovery that cannot be reduced to linguistic descriptions.

Current multimodal models shoehorn visual data into a 1D text-based sequence. True spatial intelligence is different. It requires a native 3D/4D representation to understand a world governed by physics, not just human-generated language. This is a foundational architectural shift, not an extension of LLMs.

Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.

Programming is not a linear, left-to-right task; developers constantly check bidirectional dependencies. Autoregressive transformers, which generate strictly left to right, are a poor match for this. Diffusion models, which can refine different parts of the code simultaneously, offer a more natural and potentially superior architecture for coding tasks.
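The contrast in generation order can be illustrated with a deliberately minimal toy (not a real diffusion model — an oracle stands in for the learned denoiser, and the target string is made up): starting from a fully masked buffer, each step "denoises" a random subset of positions in parallel, so the program takes shape in no particular left-to-right order:

```python
import random

random.seed(0)
target = "def add(a, b): return a + b"  # hypothetical target program
MASK = "_"
state = [MASK] * len(target)

steps = 0
while MASK in state:
    steps += 1
    # Refine a random half of the still-masked positions in parallel --
    # the fill order is arbitrary, unlike autoregressive decoding.
    masked = [i for i, t in enumerate(state) if t == MASK]
    for i in random.sample(masked, k=max(1, len(masked) // 2)):
        state[i] = target[i]  # oracle plays the role of the denoiser

assert "".join(state) == target
```

The point of the sketch is purely structural: every iteration commits tokens at scattered positions at once, which is the property that makes diffusion-style refinement a plausible fit for code with bidirectional dependencies.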

Transformers Are Fundamentally Models of Sets, Not Sequences | RiffOn