Logical Intelligence Argues LLM Reasoning Is Flawed Because It's Tethered to Specific Human Languages

An LLM's intelligence is tied to the languages it was trained on, so its reasoning process can differ between, say, English and French. That is unnatural for tasks like spatial reasoning, which are language-agnostic. Energy-based models (EBMs) operate at an abstract, token-free level, mapping information directly without a language-based intermediary.

Related Insights

LLMs predict the next token in a sequence. The brain's cortex may function as a general prediction engine capable of "omnidirectional inference"—predicting any missing information from any available subset of inputs, not just what comes next. This offers a more flexible and powerful form of reasoning.
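
Below is a minimal sketch of what "omnidirectional inference" means, using a toy joint distribution over four binary variables (the distribution and all names are invented for illustration): the same machinery that predicts a continuation from a prefix can just as easily infer earlier variables from later ones.

```python
# Toy "omnidirectional inference": condition on ANY observed subset of
# variables and infer the rest, rather than only predicting what comes next.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N_VARS = 4
# A full joint probability table over four binary variables.
joint = rng.random((2,) * N_VARS)
joint /= joint.sum()

def infer(observed):
    """Posterior P(x_i = 1 | observed) for every unobserved variable i."""
    states = [s for s in itertools.product((0, 1), repeat=N_VARS)
              if all(s[i] == v for i, v in observed.items())]
    z = sum(joint[s] for s in states)
    return {i: sum(joint[s] for s in states if s[i] == 1) / z
            for i in range(N_VARS) if i not in observed}

# Next-token style: observe a prefix, predict what follows.
print(infer({0: 1, 1: 0}))
# Omnidirectional: observe only the END and infer the beginning.
print(infer({3: 1}))
```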

Top LLMs like Claude 3 and DeepSeek score 0% on complex Sudoku puzzles, a task humans can solve. This isn't a minor flaw but a categorical failure: Sudoku is a constraint-satisfaction problem that requires backtracking and exploring candidates in parallel, which the transformer's sequential, token-by-token decoding cannot do.
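
A classic backtracking solver makes the contrast concrete: solving Sudoku means placing a value, recursing, and undoing the placement when a branch dead-ends, an operation a left-to-right token predictor has no native way to perform. This is a standard textbook algorithm, not code from the episode.

```python
# Depth-first Sudoku solver with backtracking; 0 marks an empty cell.

def valid(grid, r, c, v):
    """True if placing v at (r, c) breaks no row, column, or box constraint."""
    if v in grid[r] or any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill empty cells depth-first, backtracking out of dead ends."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo the placement: the step autoregression lacks
                return False  # no value fits: signal the caller to backtrack
    return True  # no empty cells left: solved

# Usage: solve() mutates the grid in place and returns True on success.
puzzle = [[0] * 9 for _ in range(9)]  # empty grid; any valid fill will do
print(solve(puzzle), puzzle[0])
```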

LLMs operate autoregressively, committing to one decision (token) at a time without seeing the full problem space, which can lead to hallucinations or dead ends. EBMs are non-autoregressive: they evaluate all candidate routes simultaneously and select an optimal path, much like using a bird's-eye view of a map to steer around a hole in the road.
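
The map analogy can be made concrete with a toy grid search (the grid and the "holes" are invented for illustration): a greedy walker that always takes the locally best step can trap itself, while a breadth-first search that weighs whole routes before committing finds the way around.

```python
# Greedy one-step-at-a-time routing vs. whole-route search on a toy grid.
# '#' cells are holes; S is the start, G the goal.
from collections import deque

GRID = ["S....",
        "...#.",
        ".#.#.",
        ".#.#.",
        ".###G"]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 5 and 0 <= nc < 5 and GRID[nr][nc] != "#":
            yield nr, nc

def greedy(start, goal):
    """Always take the step that looks best now; cannot back out of a trap."""
    path, seen = [start], {start}
    while path[-1] != goal:
        options = [n for n in neighbors(*path[-1]) if n not in seen]
        if not options:
            return None  # walked into a dead end
        best = min(options, key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
        path.append(best)
        seen.add(best)
    return path

def bfs(start, goal):
    """Explore all routes breadth-first and reconstruct a shortest one."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for n in neighbors(*node):
            if n not in parent:
                parent[n] = node
                queue.append(n)
    return None

print("greedy:", greedy((0, 0), (4, 4)))  # dead-ends against the holes
print("bfs:   ", bfs((0, 0), (4, 4)))     # routes around them
```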

Beyond the obvious lack of non-English training data, Large Language Models are architecturally biased. Their tokenization process, designed for English, inefficiently breaks down other languages into more fragments. This increases operational costs and reduces comprehension, creating a structural disadvantage.
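
The fragmentation is easy to measure. The sketch below assumes the open-source tiktoken library and its cl100k_base encoding (the episode names no specific tokenizer); exact counts vary by tokenizer, but non-English text generally splits into more pieces per character.

```python
# Count how many tokens the same short sentence costs in different languages.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English": "The cat sat on the mat.",
    "French":  "Le chat s'est assis sur le tapis.",
    "Hindi":   "बिल्ली चटाई पर बैठी।",
}

for lang, text in samples.items():
    n_tokens = len(enc.encode(text))
    print(f"{lang:8s} {n_tokens:3d} tokens / {len(text):3d} characters")
```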

World Labs argues that AI focused on language misses the fundamental "spatial intelligence" humans use to interact with the 3D world. This capability, which evolved over hundreds of millions of years, is crucial for true understanding and cannot be fully captured by 1D text, a lossy representation of physical reality.

Models built for multilingual use, like Meta's LLaMA, don't necessarily "think" in multiple languages. They often retrieve answers internally in English and then translate back to the source language. This extra step introduces significant opportunities for error, undermining their multilingual promise: knowledge is lost in translation.
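
A toy round trip shows where the loss comes from (the vocabularies below are invented for illustration): when two distinct source-language words map to one English word, an English pivot cannot recover the original distinction.

```python
# French -> English pivot -> French: distinctions collapse at the pivot.
FR_TO_EN = {"fleuve": "river", "rivière": "river",   # both mean "river"
            "savoir": "know",  "connaître": "know"}  # both mean "know"
EN_TO_FR = {"river": "rivière", "know": "savoir"}    # the pivot must pick one

def round_trip(word):
    """Translate to English and back, as an English-centric model might."""
    return EN_TO_FR[FR_TO_EN[word]]

for w in ("fleuve", "rivière", "savoir", "connaître"):
    print(f"{w:10s} -> {round_trip(w)}")  # 'fleuve' and 'connaître' are lost
```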

To demonstrate the flaw, researchers ran two tests. In one, they used nonsensical words in a familiar sentence structure, and the LLM still gave a domain-appropriate answer. In the other, they used a known fact in an unfamiliar structure, and the model failed. Together, the tests showed that the model depends on syntax rather than semantics.

To improve LLM reasoning, researchers feed them data that inherently contains structured logic. Training on computer code was an early breakthrough, as it teaches patterns of reasoning far beyond coding itself. Textbooks are another key source for building smaller, effective models.

EBMs analyze data to learn its underlying rules, storing that knowledge in inspectable latent variables that shape an energy landscape. This contrasts with LLMs, which are black boxes whose reasoning process is opaque. With EBMs, you can observe the model's internal state in real time to see what it has learned.
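
As a minimal sketch of that idea (a hand-built one-dimensional energy function, not the company's model), inference is descent on the energy landscape, and the landscape itself can be inspected at any point by simply evaluating the energy:

```python
import numpy as np

def energy(y):
    """Toy double-well landscape: low energy marks a compatible answer."""
    return (y ** 2 - 1.0) ** 2 + 0.3 * y

def grad(y):
    """Analytic gradient of the energy above."""
    return 4.0 * y * (y ** 2 - 1.0) + 0.3

# Inference: gradient descent from an initial guess toward a low-energy answer.
y, lr = 0.4, 0.05
for _ in range(200):
    y -= lr * grad(y)
print(f"inferred y = {y:.3f}, energy = {energy(y):.3f}")

# Inspection: the whole landscape is available to read, not just one sample.
# Evaluate it anywhere to see where the minima (preferred answers) sit.
for yy in np.linspace(-2.0, 2.0, 9):
    print(f"E({yy:+.1f}) = {energy(yy):7.2f}")
```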

Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus for the next wave of AI models.
