A Harvard study showed LLMs can predict planetary orbits (pattern fitting) but generate nonsensical force vectors when probed. This reveals a critical gap: current models mimic data patterns without developing a generalizable understanding of the underlying physical laws, a shortfall that separates them from human intelligence.
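A toy analogue of that probe, assuming nothing about the study's actual setup: fit a black-box next-step predictor to simulated orbital positions, roll it forward, then finite-difference the rollout to recover the forces the model implicitly assumes and compare them with the inverse-square law. The constants, step counts, and the linear predictor below are all illustrative choices.

```python
import numpy as np

# Simulate a 2D Kepler orbit (GM = 1) with small-step Euler integration.
GM, dt = 1.0, 0.01
pos, vel = np.array([1.0, 0.0]), np.array([0.0, 1.1])
traj = []
for _ in range(4000):
    traj.append(pos.copy())
    acc = -GM * pos / np.linalg.norm(pos) ** 3      # true Newtonian acceleration
    vel = vel + dt * acc
    pos = pos + dt * vel
traj = np.array(traj)                               # shape (4000, 2)

# "Black-box" predictor: least-squares map from k past positions to the next one.
k = 4
X = np.hstack([traj[i : len(traj) - k + i] for i in range(k)])
y = traj[k:]
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the predictor forward on its own outputs.
rollout = list(traj[:k])
for _ in range(1000):
    rollout.append(np.hstack(rollout[-k:]) @ W)
rollout = np.array(rollout)

# Probe: the accelerations the rollout implies (second differences) versus the
# inverse-square-law accelerations at the same points. Low position error does
# not by itself guarantee low force-vector error.
implied = (rollout[2:] - 2 * rollout[1:-1] + rollout[:-2]) / dt**2
newton = -GM * rollout[1:-1] / np.linalg.norm(rollout[1:-1], axis=1, keepdims=True) ** 3
pos_err = np.linalg.norm(rollout - traj[: len(rollout)], axis=1).mean()
force_err = np.linalg.norm(implied - newton, axis=1).mean()
print(f"mean position error: {pos_err:.4f}   mean force-vector error: {force_err:.4f}")
```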

Related Insights

A novel prompting technique involves instructing an AI to assume it knows nothing about a fundamental concept, like gender, before analyzing data. This "unlearning" process allows the AI to surface patterns from a truly naive perspective that is impossible for a human to replicate.
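In practice the technique is just a framing instruction. A minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and toy CSV are placeholder assumptions, not the technique's canonical form:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt implementing the "unlearning" framing: a familiar
# concept (here, gender) is recast as an opaque label before analysis begins.
system = (
    "You are a data analyst. Assume you have no prior knowledge of the concept "
    "'gender': treat the values in the 'gender' column as opaque labels A and B "
    "with no cultural, biological, or statistical associations. Report only "
    "patterns that the data itself supports."
)

rows = "name,gender,salary\nPat,A,72000\nSam,B,65000\nLee,A,71000\nKim,B,64500"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": f"Analyze this CSV and describe any patterns:\n{rows}"},
    ],
)
print(response.choices[0].message.content)
```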

Today's AI models are powerful but lack a true sense of causality, leading to illogical errors. Unconventional AI's Naveen Rao hypothesizes that building AI on substrates with inherent time and dynamics—mimicking the physical world—is the key to developing this missing causal understanding.

Current AI can learn to predict complex patterns, like planetary orbits, from data. However, it struggles to abstract the underlying causal laws, such as Newtonian physics (F = ma). This leap to a higher level of abstraction remains a fundamental challenge beyond simple pattern recognition.

Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.

While a world model can generate a physically plausible arch, it doesn't understand the underlying physics of force distribution. This gap between pattern matching and causal reasoning marks a fundamental divide between AI and human intelligence, making current models unsuitable for mission-critical applications like architecture.
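One concrete version of the missing check, under toy assumptions: an arch carrying only its own weight is in pure compression when it follows an inverted catenary, so deviation from the best-fit catenary is a crude proxy for whether a generated shape respects force flow. The circular profile here is a hypothetical stand-in for generated geometry:

```python
import numpy as np
from scipy.optimize import curve_fit

# Inverted catenary through the crown: y = h - a*(cosh(x/a) - 1).
def inverted_catenary(x, a, h):
    return h - a * (np.cosh(x / a) - 1.0)

# Hypothetical arch profile: a circular arc. It looks plausible but is not a
# catenary, so under self-weight it needs bending resistance, not pure compression.
theta = np.linspace(-np.pi / 3, np.pi / 3, 50)
x, y = 10 * np.sin(theta), 10 * np.cos(theta) - 5

params, _ = curve_fit(inverted_catenary, x, y, p0=[10.0, y.max()])
residual = np.max(np.abs(y - inverted_catenary(x, *params)))
print(f"max deviation from best-fit catenary: {residual:.3f} m")
```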

The most fundamental challenge in AI today is not scale or architecture, but the fact that models generalize dramatically worse than humans. Solving this sample efficiency and robustness problem is the true key to unlocking the next level of AI capabilities and real-world impact.

Current multimodal models shoehorn visual data into the 1D token sequences designed for text. True spatial intelligence is different: it requires a native 3D/4D representation to understand a world governed by physics, not just human-generated language. This is a foundational architectural shift, not an extension of LLMs.
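A minimal numpy sketch of the representational contrast; the shapes follow ViT-like conventions and the voxel scheme is illustrative, not any specific model's architecture:

```python
import numpy as np

# --- How current multimodal models ingest vision: flatten to a 1D sequence ---
image = np.random.rand(224, 224, 3)              # H x W x C
p = 16                                           # ViT-style patch size
patches = image.reshape(14, p, 14, p, 3).transpose(0, 2, 1, 3, 4)
tokens = patches.reshape(14 * 14, p * p * 3)     # (196, 768) sequence of patch tokens
# Spatial adjacency now survives only through learned position embeddings.

# --- A native 3D alternative: an occupancy grid indexed by physical coordinates ---
points = np.random.rand(10_000, 3) * 2.0         # hypothetical point cloud in a 2 m cube
voxels = np.zeros((64, 64, 64), dtype=bool)
idx = np.clip((points / 2.0 * 64).astype(int), 0, 63)
voxels[idx[:, 0], idx[:, 1], idx[:, 2]] = True
# Here geometry IS the index: neighborhood queries are direct array lookups,
# not relationships to be recovered from token order.
print(tokens.shape, int(voxels.sum()), "occupied voxels")
```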

A critical weakness of current AI models is their inefficient learning process. They require orders of magnitude more experience, sometimes 100,000 times more data than a human encounters in a lifetime, to acquire their skills. This highlights a key difference from human cognition and a major hurdle for developing more advanced, human-like AI.
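A back-of-envelope version of that ratio, using rough, commonly cited ballpark figures rather than numbers from the source:

```python
# Both inputs are order-of-magnitude estimates, not measurements.
human_words_by_adulthood = 2e8   # ~10^8 words heard or read over roughly 20 years
llm_pretraining_tokens = 2e13    # frontier-scale pretraining corpora, order of magnitude
print(f"ratio: {llm_pretraining_tokens / human_words_by_adulthood:,.0f}x")  # ~100,000x
```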

Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.