The core transformer architecture is permutation-equivariant: it operates on sets of tokens, not ordered sequences. Sequence order is added back via positional embeddings, which is why transformers adapt naturally to non-sequential data structures like 3D worlds, a point many practitioners overlook.
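A minimal sketch below (plain NumPy, a single attention head, toy dimensions chosen only for illustration) makes the point concrete: without positional embeddings, permuting the input tokens simply permutes the output in the same way.

```python
# Sketch: plain self-attention is permutation-equivariant.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention, no positional info."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n_tokens, d_model = 6, 8                      # toy sizes for illustration
x = rng.normal(size=(n_tokens, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

perm = rng.permutation(n_tokens)
out = self_attention(x, w_q, w_k, w_v)
out_from_permuted_input = self_attention(x[perm], w_q, w_k, w_v)

# Permuting the inputs permutes the outputs identically: the layer sees a set.
assert np.allclose(out[perm], out_from_permuted_input)
```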
World Labs co-founder Fei-Fei Li posits that spatial intelligence, the ability to reason about and interact with 3D space, is a form of intelligence distinct from and complementary to language. This capability is essential for tasks like robotic manipulation and scientific discovery that cannot be reduced to linguistic descriptions.
Current LLMs abstract language into discrete tokens, discarding rich information such as font, layout, and spatial arrangement. A "pixel maximalist" view argues that processing visual representations of text (as humans do) is a less lossy, more general approach that captures the physical manifestation of language in the world.
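As a rough illustration of that idea (Pillow is assumed as the renderer; the helper name is hypothetical), text can be rasterized into a pixel grid so that layout and spatial arrangement survive as model input, rather than being discarded by a tokenizer:

```python
# Sketch: feed pixels of rendered text instead of discrete token ids.
from PIL import Image, ImageDraw
import numpy as np

def render_text_to_pixels(text, width=256, height=64):
    """Rasterize a string to a grayscale array; layout is preserved."""
    img = Image.new("L", (width, height), color=255)      # white canvas
    ImageDraw.Draw(img).text((4, 4), text, fill=0)         # default font
    return np.asarray(img, dtype=np.float32) / 255.0       # model-ready pixels

pixels = render_text_to_pixels("spatial   intelligence")
print(pixels.shape)  # (64, 256): spatial structure a tokenizer would discard
```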
With industry dominating large-scale model training, academia’s comparative advantage has shifted. Its energy should go to high-risk, unconventional ideas, such as new algorithms and hardware-aligned architectures, that commercial labs driven by near-term ROI cannot prioritize.
Progress in deep learning, from AlexNet's GPU leap to today's massive models, is best understood as a history of scaling compute. That scaling, roughly a million-fold increase in training compute, enabled the transition from text to more data-intensive modalities like vision and spatial intelligence.
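A back-of-envelope check of that figure (the doubling rate is an assumption for illustration, not a sourced number): if frontier training compute doubles roughly every six months, the decade since AlexNet compounds to about a million-fold.

```python
# Sketch: compounding compute growth since AlexNet (2012).
years = 10
doublings_per_year = 2                     # assumption: doubling every ~6 months
growth = 2 ** (years * doublings_per_year)
print(f"{growth:,}x")                      # 1,048,576x -- on the order of a million
```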
Today's transformers are optimized for matrix multiplication (MatMul) on GPUs. However, as compute scales to distributed clusters, MatMul may not be the most efficient primitive. Future AI architectures could be drastically different, built on new primitives better suited for large-scale, distributed hardware.
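A rough per-layer FLOP count (standard transformer formulas with illustrative sizes, not figures from the source) shows why the fit is so tight: essentially every FLOP in a transformer layer is a matrix multiplication.

```python
# Sketch: FLOP breakdown of one transformer layer -- it is matmuls all the way down.
n, d, ff = 2048, 4096, 4 * 4096   # sequence length, model dim, MLP width (illustrative)

qkv_proj   = 2 * n * d * (3 * d)  # Q, K, V projections        (matmul)
attn_score = 2 * n * n * d        # Q @ K^T                    (matmul)
attn_mix   = 2 * n * n * d        # softmax(QK^T) @ V          (matmul)
out_proj   = 2 * n * d * d        # output projection          (matmul)
mlp        = 2 * 2 * n * d * ff   # two feed-forward layers    (matmul)

matmul_flops = qkv_proj + attn_score + attn_mix + out_proj + mlp
print(f"per-layer matmul FLOPs: {matmul_flops:.2e}")
# Elementwise work (softmax, layernorm, activations) is orders of magnitude
# smaller, so the architecture is effectively a pipeline of matmuls.
```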
According to Stanford's Fei-Fei Li, the central challenge facing academic AI isn't the rise of closed, proprietary models. The more pressing issue is a severe imbalance in resources, particularly compute, which undermines academia's ability to pursue its distinctive mission of foundational, exploratory research.
Current AI can learn to predict complex patterns, like planetary orbits, from data. However, it struggles to abstract the underlying causal laws, such as Newton's second law (F = ma). This leap to a higher level of abstraction remains a fundamental challenge beyond pattern recognition.
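A toy sketch of that gap (NumPy plus scikit-learn assumed; the setup is illustrative, not an experiment from the source): a generic regressor can learn to predict the next state of a simulated orbit quite well, yet nothing in its weights amounts to a reusable law like F = ma.

```python
# Sketch: pattern prediction on one orbit vs. abstracting the underlying law.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_orbit(x, y, vx, vy, steps=4000, dt=1e-3, gm=1.0):
    """Semi-implicit Euler integration of a body under an inverse-square force."""
    states = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx += -gm * x / r3 * dt          # velocity update from acceleration
        vy += -gm * y / r3 * dt
        x += vx * dt                     # position update with new velocity
        y += vy * dt
        states.append((x, y, vx, vy))
    return np.array(states)

traj = simulate_orbit(1.0, 0.0, 0.0, 1.1)     # one elliptical orbit
X, Y = traj[:-1], traj[1:]                    # learn to predict the next state
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, Y)

print("mean next-step error:", np.abs(model.predict(X) - Y).mean())
# The numeric fit can look excellent, but the model has only interpolated this
# orbit's pattern; it has not abstracted the causal law that generated it.
```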
