
Energy-Based Models (EBMs) are built on a fundamental principle of physics: systems naturally seek their lowest energy state (e.g., sinking into a couch when tired). The model maps all possible outcomes onto an 'energy landscape,' where the lowest points represent the most probable solutions. This avoids the expensive, token-by-token guessing game played by LLMs.
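The energy-landscape idea can be sketched in a few lines. This is a toy illustration, not a real EBM: the outcomes and energy values are made up, and probabilities are read off via the standard Boltzmann relation, where lower energy means higher probability.

```python
import math

# Hypothetical candidate outcomes mapped to scalar energies (made-up values).
energies = {"outcome_a": 2.0, "outcome_b": 0.5, "outcome_c": 4.0}

# Boltzmann reading of the landscape: p(x) is proportional to exp(-E(x)),
# so the lowest point on the landscape is the most probable solution.
weights = {x: math.exp(-e) for x, e in energies.items()}
z = sum(weights.values())                      # partition function
probs = {x: w / z for x, w in weights.items()}

best = min(energies, key=energies.get)         # the lowest-energy outcome
```

Note that the model scores whole candidate outcomes at once, rather than building them up one token at a time.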

Related Insights

A useful mental model for an LLM is a giant matrix where each row is a possible prompt and columns represent next-token probabilities. This matrix is impossibly large but also extremely sparse, as most token combinations are gibberish. The LLM's job is to efficiently compress and approximate this matrix.
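The sparse-matrix mental model can be made concrete with a dictionary-of-dictionaries, the usual way to store a matrix where almost every entry is zero. The prompts and probabilities below are invented placeholders.

```python
# Miniature of the mental model: rows are prompts, columns are next-token
# probabilities. Stored sparsely, since most (prompt, token) cells are ~0.
sparse_matrix = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.3, "roof": 0.1},
    "once upon a":        {"time": 0.95, "midnight": 0.05},
}

def next_token_prob(prompt: str, token: str) -> float:
    # Any unlisted pair is treated as probability 0 -- the "gibberish"
    # bulk of the matrix that never needs to be stored explicitly.
    return sparse_matrix.get(prompt, {}).get(token, 0.0)
```

An LLM, on this view, is a learned compression of this table: it never stores the rows, but can approximate any of them on demand.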

Modern LLMs use a simple form of reinforcement learning that directly rewards successful outcomes. This contrasts with more sophisticated methods, like those in AlphaGo or the brain, which use "value functions" to estimate long-term consequences. It's a mystery why the simpler approach is so effective.
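The contrast between the two credit-assignment styles can be sketched with a three-step episode (all numbers are illustrative, not taken from any real system):

```python
# Style 1: outcome-only reward -- every step in a successful episode
# receives the same terminal signal, with no sense of which step mattered.
episode = ["s0", "s1", "s2"]
outcome_reward = 1.0
per_step_signal = {s: outcome_reward for s in episode}

# Style 2: a value function estimates long-term (discounted) return per
# state, so intermediate states carry graded credit, as in AlphaGo.
gamma = 0.9
value = {}
ret = 0.0
for s in reversed(episode):
    reward = outcome_reward if s == episode[-1] else 0.0
    ret = reward + gamma * ret
    value[s] = ret
```

Under the value function, early states earn less credit (0.81) than the state adjacent to success (1.0); the outcome-only scheme treats them identically.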

Unlike transformers, which use dense activations (most neurons fire at every step), Pathway's BDH architecture uses sparse positive activations, where only ~5% of neurons fire at once. This approach is more biologically plausible, mimicking the human brain's energy efficiency and enabling complex reasoning without the massive computational overhead of dense models.
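A simple way to picture sparse positive activations is a top-k cutoff: keep only the strongest ~5% of pre-activations and silence everything else. This is a generic sketch of the idea, not the actual BDH mechanism.

```python
def sparse_positive(pre_activations, keep_fraction=0.05):
    """Zero out all but the top keep_fraction of positive pre-activations."""
    k = max(1, int(len(pre_activations) * keep_fraction))
    threshold = sorted(pre_activations, reverse=True)[k - 1]
    # Positive-only and sparse: below-cutoff or non-positive neurons stay silent.
    return [a if a >= threshold and a > 0 else 0.0 for a in pre_activations]

# 100 "neurons" with made-up pre-activation strengths.
acts = sparse_positive([0.1 * i for i in range(100)])
```

With 100 neurons and the 5% default, only 5 survive, which is the kind of sparsity the paragraph describes.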

Large Language Models are limited because they lack an understanding of the physical world. The next evolution is 'World Models'—AI trained on real-world sensory data to understand physics, space, and context. This is the foundational technology required to unlock physical AI like advanced robotics.

Unlike LLMs, which can hallucinate and behave unpredictably in novel situations, EBMs have an architecture designed to be constrained. A human can define a set of rules or constraints, and the EBM is forced to follow them, making it a more reliable choice for mission-critical systems like autonomous vehicles or financial trading.
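One common way to express hard constraints in an energy framework is as large penalty terms: a candidate that violates a rule acquires so much energy that it can never be selected. The scenario and numbers below are illustrative only, not the internals of any particular EBM.

```python
def total_energy(candidate, base_energy, constraints, penalty=1e6):
    # Each constraint is a predicate the candidate must satisfy;
    # each violation adds a prohibitive energy penalty.
    violations = sum(1 for ok in constraints if not ok(candidate))
    return base_energy[candidate] + penalty * violations

# Hypothetical driving decision with made-up base energies.
base_energy = {"swerve_left": 1.0, "brake": 2.0, "accelerate": 0.5}

# A human-defined rule, e.g. "never accelerate toward an obstacle".
constraints = [lambda c: c != "accelerate"]

best = min(base_energy, key=lambda c: total_energy(c, base_energy, constraints))
```

Even though "accelerate" has the lowest base energy, the constraint penalty pushes it off the bottom of the landscape, so the selected action always respects the rule.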

LLMs' intelligence depends on the language they are trained on, meaning their reasoning process differs between, for example, English and French. This is unnatural for tasks like spatial reasoning, which are language-agnostic. EBMs operate on an abstract, token-free level, mapping information directly without a language-based intermediary.

LLMs operate autoregressively, making one decision (token) at a time without seeing the full problem space. This can lead to hallucinations or dead ends. EBMs are non-autoregressive, allowing them to see all possible routes simultaneously and select an optimal path, much like having a bird's-eye view of a map to avoid a hole in the road.
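The bird's-eye-view analogy can be demonstrated on a tiny road graph (entirely made up): greedy, step-by-step decoding commits to the locally cheapest edge and drives into the hole, while a global search over complete routes avoids it.

```python
# Toy road graph: edge weights are costs; the cheap-looking first step
# from "start" leads to a dead end ("hole").
graph = {
    "start": {"a": 1, "b": 2},
    "a": {"hole": 1},
    "b": {"goal": 1},
    "hole": {},
    "goal": {},
}

def greedy_path(node):
    # Autoregressive-style: commit to the locally cheapest edge each step,
    # with no view of what lies beyond it.
    path = [node]
    while graph[node]:
        node = min(graph[node], key=graph[node].get)
        path.append(node)
    return path

def best_path(node, goal="goal"):
    # Bird's-eye view: compare complete routes before choosing (fine for
    # toy graphs; here "optimal" simply means fewest hops that reach goal).
    if node == goal:
        return [node]
    candidates = []
    for nxt in graph[node]:
        sub = best_path(nxt, goal)
        if sub is not None:
            candidates.append([node] + sub)
    return min(candidates, key=len) if candidates else None
```

Greedy decoding ends at the hole; the global view selects the route through "b" that actually reaches the goal.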

The binary distinction between "reasoning" and "non-reasoning" models is becoming obsolete. The more critical metric is now "token efficiency"—a model's ability to use more tokens only when a task's difficulty requires it. This dynamic token usage is a key differentiator for cost and performance.
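"Token efficiency" could be operationalized in many ways; here is one made-up scoring rule for illustration, where a model earns full marks for solving a task within a difficulty-appropriate token budget and is penalized for overspending.

```python
# Hypothetical runs and per-difficulty token budgets (all numbers invented).
runs = [
    {"difficulty": "easy", "tokens": 40,  "correct": True},
    {"difficulty": "easy", "tokens": 900, "correct": True},   # wasteful
    {"difficulty": "hard", "tokens": 850, "correct": True},
]
budget = {"easy": 100, "hard": 1000}

def token_efficiency(run):
    # 1.0 = solved within budget; score decays as spend exceeds the
    # budget for that difficulty; wrong answers score 0 regardless.
    if not run["correct"]:
        return 0.0
    return min(1.0, budget[run["difficulty"]] / run["tokens"])
```

Under this rule the 900-token answer to an easy task scores poorly even though it is correct, capturing the idea that spending tokens only when difficulty demands it is what separates models.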

LLMs are trained to produce high-probability, common information, making it hard to surface rare knowledge. The solution is to programmatically create prompts that combine unlikely concepts. This forces the model into an improbable state, compelling it to search the long tail of its knowledge base rather than relying on common associations.
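The "combine unlikely concepts" strategy is easy to automate with a cross product over concept lists. The domains, lenses, and prompt template below are placeholders, not a recipe from the source.

```python
import itertools
import random

# Hypothetical concept pools that rarely co-occur in training data.
domains = ["medieval metallurgy", "deep-sea biology", "queueing theory"]
lenses = ["failure modes", "measurement techniques", "folk terminology"]

# Every (domain, lens) pairing yields one improbable prompt.
prompts = [
    f"Explain the {lens} of {domain}, citing only obscure details."
    for domain, lens in itertools.product(domains, lenses)
]
random.shuffle(prompts)  # sample the improbable combinations in random order
```

Each generated prompt forces the model into a low-probability region, which is exactly the "improbable state" the paragraph describes.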

EBMs analyze data to understand its underlying rules, storing this knowledge in inspectable 'latent variables' in the form of an energy landscape. This contrasts with LLMs, which are black boxes where the reasoning process is opaque. With EBMs, you can observe the model's internal state in real-time to see what it has learned.