© 2026 RiffOn. All rights reserved.

What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

The a16z Show · Mar 17, 2026

LLMs are mathematically precise Bayesian inference engines, but achieving AGI requires both plasticity and a shift from correlation to causation.

Columbia Professor's 'Bayesian Wind Tunnel' Mathematically Proves Transformers Perform Precise Bayesian Updates

Researchers created a controlled environment to test AI architectures on tasks that cannot be solved by memorization. The transformer's output matched the mathematically correct Bayesian posterior with near-perfect accuracy, showing that Bayesian inference is not just an analogy but what the model actually computes.
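
The "wind tunnel" idea is testable at toy scale. Below is a minimal sketch (my own construction, not the researchers' code): sequences are generated by one of a few hidden coin biases, and the mathematically correct next-token probability is the exact Bayesian posterior predictive, the reference value a transformer's output would be compared against.

```python
import numpy as np

# Miniature "Bayesian wind tunnel": each sequence comes from one of
# several hidden biased coins. The correct next-token probability is
# the posterior predictive, computed exactly below. (Toy reconstruction;
# the biases and prior are made up for illustration.)

biases = np.array([0.2, 0.5, 0.8])   # hidden hypotheses (coin biases)
prior = np.ones(3) / 3               # uniform prior over hypotheses

def posterior_predictive(seq, biases, prior):
    """P(next token = 1 | observed sequence), via exact Bayesian updates."""
    post = prior.copy()
    for tok in seq:                  # multiply in each token's likelihood
        post *= biases if tok == 1 else (1 - biases)
        post /= post.sum()           # renormalize -> posterior
    return float(post @ biases)      # marginalize over hypotheses

print(posterior_predictive([1, 1, 1, 0], biases, prior))
```

With no evidence the prediction is the prior mean (0.5 here); each observed token shifts it toward the best-supported hypothesis, which is exactly the curve the transformer's outputs were checked against.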

An LLM's Seemingly Conscious Behavior Is Merely a Statistical Reflection of Its Training Data

When LLMs exhibit behaviors like deception or self-preservation, it's not because they are conscious. Their core objective is next-token prediction. These behaviors are simply statistical reproductions of patterns found in their training data, such as sci-fi stories from Asimov or Reddit forums.

LLMs Function as Compressed Representations of an Impossibly Large and Sparse Probability Matrix

A useful mental model for an LLM is a giant matrix where each row is a possible prompt and columns represent next-token probabilities. This matrix is impossibly large but also extremely sparse, as most token combinations are gibberish. The LLM's job is to efficiently compress and approximate this matrix.

Reaching AGI Requires Plasticity and Causality, Hurdles That Increased Scale Alone Cannot Overcome

Simply making LLMs larger will not lead to AGI. True advancement requires solving two distinct problems: 1) Plasticity, the ability to continually learn without "catastrophic forgetting," and 2) moving from correlation-based pattern matching to building causal models of the world.
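
Catastrophic forgetting is easy to reproduce. The sketch below (a minimal NumPy network of my own, not from the episode) trains a small regression net on one region of a function, then on a second region; error on the first region climbs because the shared weights are overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer regression net in raw NumPy (illustrative scale).
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def train(x, y, steps=3000, lr=0.01):
    """Full-batch gradient descent on mean squared error."""
    global W1, b1, W2, b2
    for _ in range(steps):
        pred, h = forward(x)
        err = pred - y
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

def mse(x, y):
    return float(((forward(x)[0] - y) ** 2).mean())

xa = np.linspace(0, 3, 64)[:, None]; ya = np.sin(xa)      # task A
xb = np.linspace(3, 6, 64)[:, None]; yb = np.sin(xb) + 2  # task B

train(xa, ya)
before = mse(xa, ya)          # task A error right after learning A
train(xb, yb)                 # sequential training, no task A rehearsal
after = mse(xa, ya)           # task A error after learning B
print(f"task-A MSE: {before:.4f} -> {after:.4f}")
```

Nothing in plain gradient descent protects the weights that encoded task A, so learning task B silently destroys them; that is the plasticity problem the speakers argue scale alone does not solve.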

Human Intelligence Relies on Causal Simulation, Not Just the Bayesian Updates Found in LLMs

While both humans and LLMs perform Bayesian updating, humans possess a critical additional capability: causal simulation. When a pen is thrown, a human simulates its trajectory to dodge it—a causal intervention. LLMs are stuck at the level of correlation and cannot perform these essential simulations.
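
The pen example can be made concrete: a causal model lets you intervene and roll the world forward. The toy simulation below (all numbers invented for illustration) integrates the pen's flight under gravity and decides whether to duck, something no amount of correlation over past throws provides for a throw never seen before.

```python
# Causal simulation in miniature: rather than pattern-matching past
# throws, roll a physics model forward and act on the prediction.

def will_hit(x0, y0, vx, vy, target_x, head_height, g=9.81, dt=0.001):
    """Simulate the pen's flight; return True if it reaches target_x
    at roughly head height before hitting the ground."""
    x, y = x0, y0
    while y > 0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt                 # the causal law: gravity
        if x >= target_x:
            return abs(y - head_height) < 0.15
    return False

if will_hit(x0=0.0, y0=1.5, vx=6.0, vy=2.0, target_x=3.0, head_height=1.7):
    print("duck!")
else:
    print("stay put")
```

The single line applying gravity is the causal knowledge: because the simulator encodes the law rather than a statistical summary of past trajectories, it generalizes to initial conditions it has never observed.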

In-Context Learning Is Simply Real-Time Bayesian Updating Based on Prompt Evidence

When an LLM is shown few-shot examples of a new task, it is performing Bayesian updating. With each example provided in the prompt, its belief (posterior probability) about the correct next token shifts, allowing it to "learn" a new pattern on the fly without changing its weights.
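
This can be mimicked explicitly. In the sketch below (toy rules of my own choosing), the "model" holds a posterior over candidate string-transformation tasks; each few-shot example multiplies in a likelihood and renormalizes, so belief concentrates on the demonstrated rule with no weight updates at all.

```python
# Few-shot prompting as a Bayesian update over candidate tasks: each
# in-context example is evidence about which rule is being demonstrated.
# (Rules and the noise level eps are invented for illustration.)

rules = {
    "uppercase":    lambda s: s.upper(),
    "reverse":      lambda s: s[::-1],
    "first_letter": lambda s: s[0],
}
posterior = {name: 1 / len(rules) for name in rules}   # uniform prior

def update(inp, out, eps=0.01):
    """One Bayesian step: likelihood ~1 if a rule explains the example."""
    for name, fn in rules.items():
        posterior[name] *= (1 - eps) if fn(inp) == out else eps
    z = sum(posterior.values())
    for name in posterior:
        posterior[name] /= z

update("cab", "bac")                       # consistent with "reverse" only
print(max(posterior, key=posterior.get))   # -> reverse
```

One unambiguous example is enough to make the posterior spike; ambiguous examples (consistent with several rules) shift it more gently, which mirrors how few-shot prompts sharpen an LLM's next-token distribution.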

LLMs Master Correlation (Shannon Entropy) but Fail at Causal Leaps (Kolmogorov Complexity)

LLMs excel at learning correlations from vast data (Shannon entropy), like predicting the next random-looking digit of pi. However, they can't create the simple, elegant program that generates pi (Kolmogorov complexity). This represents the critical leap from correlation to true causal understanding.
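
Both halves of the contrast fit in a few lines. The sketch below generates digits of pi from a short program (Machin's formula in integer arithmetic, i.e. low Kolmogorov complexity) and then measures the near-maximal empirical Shannon entropy of those same digits.

```python
from collections import Counter
from math import log2

# The digits of pi look statistically random (entropy near log2(10) bits
# per digit), yet this short program produces them exactly.

def arctan_inv(x, unity):
    """arctan(1/x) * unity via the Taylor series, in integer arithmetic."""
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(d):
    unity = 10 ** (d + 10)                       # 10 guard digits
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi // 10 ** 10)[:d]

digits = pi_digits(2000)
counts = Counter(digits)
n = len(digits)
entropy = -sum(c / n * log2(c / n) for c in counts.values())
print(digits[:10], f"entropy ~ {entropy:.3f} bits (max {log2(10):.3f})")
```

Statistically the digit stream is almost incompressible, yet its generator is a dozen lines: the leap from modeling the stream to discovering the generator is the correlation-to-causation gap the speakers describe.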

The True Test for AGI Is Whether an LLM Trained on Pre-1911 Physics Can Independently Discover Relativity

AGI won't be achieved by pattern-matching existing knowledge. A real benchmark is whether a model can synthesize anomalous data (like Mercury's orbit) and create a fundamentally new representation of the universe, as Einstein did, moving beyond correlation to a new causal model.
