The 'attention' mechanism in AI has roots in 1990s robotics. Dr. Wallace built a robotic eye with high resolution at its center and lower resolution in the periphery. The system detected 'interesting' data (e.g., movement) in the periphery and rapidly shifted its high-resolution gaze—its 'attention'—to that point, a physical analog to how LLMs weigh words.
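As a rough sketch of that weighting, here is a minimal scaled dot-product attention example in NumPy; the toy query and key vectors are illustrative, not drawn from any real model.

```python
import numpy as np

def attention_weights(query, keys):
    """Score each context word (key) against the current word (query) and
    normalize with a softmax: the result is how much the model 'attends'
    to each word."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)           # similarity of query to every key
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    return weights / weights.sum()

# Toy 3-dimensional vectors: one query word attending over three context words.
query = np.array([0.9, 0.1, 0.0])
keys = np.array([[1.0, 0.0, 0.0],    # closely related word
                 [0.0, 1.0, 0.0],    # unrelated word
                 [0.5, 0.5, 0.0]])   # partially related word
print(attention_weights(query, keys))  # roughly [0.41, 0.26, 0.33]: most weight on the related word
```

The model's "gaze" concentrates on the most relevant word, much as the robotic eye snapped its high-resolution fovea toward movement.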
While LLMs dominate headlines, Dr. Fei-Fei Li argues that "spatial intelligence"—the ability to understand and interact with the 3D world—is the critical, underappreciated next step for AI. This capability is the linchpin for unlocking meaningful advances in robotics, design, and manufacturing.
Drawing a parallel to the Cambrian Explosion, where vision evolved alongside nervous systems, Dr. Li argues that perception's primary purpose is to enable action and interaction. This principle suggests that for AI to advance, particularly in robotics, computer vision must be developed as the foundation for embodied intelligence, not just for classification.
The "Attention is All You Need" paper's key breakthrough was an architecture designed for massive scalability across GPUs. This focus on efficiency, anticipating the industry's shift to larger models, was more crucial to its dominance than the attention mechanism itself.
Current LLMs abstract language into discrete tokens, losing rich information like font, layout, and spatial arrangement. A "pixel maximalist" view argues that processing visual representations of text (as humans do) is a more lossless, general approach that captures the physical manifestation of language in the world.
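A toy illustration of the information loss at stake, using a whitespace split as a stand-in tokenizer and a raw character grid as a stand-in for pixels (neither is a real model's pipeline):

```python
# Two documents with identical words but different layout.
doc_a = "Total:   42\nTax:      7"
doc_b = "Total: 42 Tax: 7"

# Token view: split on whitespace. Layout, alignment, and line structure
# are discarded, so both documents look identical to the model.
tokens_a = doc_a.split()
tokens_b = doc_b.split()
print(tokens_a == tokens_b)   # True: the token stream cannot tell them apart

# Crude "pixel" view: the raw character grid keeps spatial arrangement,
# so the two documents remain distinguishable.
grid_a = [list(line) for line in doc_a.splitlines()]
grid_b = [list(line) for line in doc_b.splitlines()]
print(grid_a == grid_b)       # False: layout survives in the visual form
```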
The debate over AI consciousness isn't driven merely by models mimicking human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain to raise plausible scientific questions about shared properties such as subjective experience.
Ken Goldberg quantifies the challenge: the text data used to train LLMs would take a human 100,000 years to read. Equivalent data for robot manipulation (vision-to-control signals) doesn't exist online and must be generated from scratch, explaining the slower progress in physical AI.
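A back-of-the-envelope check of that figure, using assumed numbers (the corpus size, words per token, and reading speed are my estimates, not Goldberg's):

```python
# Rough assumed figures: a frontier-scale LLM corpus of ~15 trillion tokens,
# ~0.75 words per token, and nonstop reading at 250 words per minute.
tokens = 15e12
words = tokens * 0.75
words_per_year = 250 * 60 * 24 * 365         # reading 24/7, no sleep

years = words / words_per_year
print(f"{years:,.0f} years")                 # ~86,000 years, the same order as the quoted 100,000
```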
Google's Titans architecture for LLMs mimics human memory by applying Claude Shannon's information theory. It scans vast data streams and identifies "surprise"—statistically unexpected or rare information relative to its training data. This novel data is then prioritized for long-term memory, preventing clutter from irrelevant information.
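As a loose sketch of the idea (not the actual Titans mechanism), the snippet below gates memory writes on Shannon surprisal, with made-up probabilities and an arbitrary threshold:

```python
import math

def surprisal(prob):
    """Shannon surprisal: rare (low-probability) events carry more information."""
    return -math.log2(prob)

def update_memory(memory, item, prob, threshold=4.0):
    """Toy surprise gate: store only items whose surprisal exceeds a threshold,
    so routine, highly predictable inputs don't clutter long-term memory."""
    if surprisal(prob) > threshold:
        memory.append(item)
    return memory

memory = []
stream = [("the cat sat on the mat", 0.20),      # predictable: ~2.3 bits, skipped
          ("the cat filed a tax return", 0.01)]  # unexpected: ~6.6 bits, stored
for item, prob in stream:
    update_memory(memory, item, prob)
print(memory)   # only the surprising item is retained
```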
The 2017 introduction of "transformers" revolutionized AI. Instead of being trained on the specific meaning of each word, models began learning the contextual relationships between words. This allowed AI to predict the next word in a sequence without needing a formal dictionary, leading to more generalist capabilities.
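Even a bigram counter is enough to show the principle of predicting the next word from context alone, with no dictionary of meanings; transformers condition on far longer contexts, but the prediction target is the same.

```python
from collections import Counter, defaultdict

# Tiny corpus, the "training data". The model never sees definitions,
# only which words tend to follow which contexts.
corpus = "the robot picks the cup the robot picks the box the robot drops the cup".split()

# Count next-word statistics conditioned on the previous word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

print(predict_next("robot"))   # {'picks': 0.67, 'drops': 0.33}, no dictionary needed
```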
Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence: the ability to understand, reason about, and interact with a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus of the next wave of AI models.
Unlike older robots, which required precise maps and trajectory calculations, new robots draw on internet-scale common sense and learn motion by mimicking human demonstrations or by training in simulation. This combination has “wiped the slate clean” for what is possible in the field.
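A minimal behavior-cloning sketch of that "learning by mimicking", with synthetic stand-ins for human demonstrations (the linear policy and data here are purely illustrative):

```python
import numpy as np

# Behavior cloning in miniature: fit a policy that maps observations to the
# actions a human demonstrator took. The "demonstrations" here are synthetic
# stand-ins for teleoperation or motion-capture data.
rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 4))                              # e.g. object pose features
true_W = rng.normal(size=(4, 2))
actions = obs @ true_W + 0.01 * rng.normal(size=(500, 2))    # demonstrated motor commands

# Least-squares fit: the policy imitates the demonstrator rather than being
# hand-programmed with maps and trajectory math.
W_policy, *_ = np.linalg.lstsq(obs, actions, rcond=None)

new_obs = rng.normal(size=(1, 4))
print(new_obs @ W_policy)    # predicted action for an unseen situation
```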