Mark Cuban: Current AI Lacks True Intelligence Because It Can't Grasp Consequences

Mark Cuban believes today's AI is far from AGI because it has no understanding of consequence. He contrasts it with a toddler who learns that pushing a cup off a high chair elicits a specific reaction from a parent. This inability to model real-world cause and effect is a fundamental limitation of current LLMs.

Related Insights

A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.

Judea Pearl, a foundational figure in AI, argues that Large Language Models (LLMs) are not on a path to Artificial General Intelligence (AGI). He states they merely summarize human-generated world models rather than discovering causality from raw data. He believes scaling up current methods will not overcome this fundamental mathematical limitation.

Today's AI models are powerful but lack a true sense of causality, leading to illogical errors. Unconventional AI's Naveen Rao hypothesizes that building AI on substrates with inherent time and dynamics—mimicking the physical world—is the key to developing this missing causal understanding.

Simply making LLMs larger will not lead to AGI. True advancement requires solving two distinct problems: 1) plasticity, the ability to learn continually without "catastrophic forgetting," and 2) causality, moving from correlation-based pattern matching to building causal models of the world.
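To make "catastrophic forgetting" concrete, here is a minimal sketch (a toy model invented for illustration, not anything discussed in the episode): a tiny numpy network fits sin(x) on one interval, is then fine-tuned on an adjacent interval with plain gradient descent, and its error on the first interval degrades because nothing protects earlier learning.

```python
# Minimal illustration of catastrophic forgetting: plain gradient descent
# on task B overwrites what a tiny network learned on task A. All numbers
# are illustrative; this is a toy model, not a claim about any real LLM.
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-16-1 MLP with a tanh hidden layer.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)            # hidden activations, shape (N, 16)
    return h, h @ W2 + b2               # predictions, shape (N, 1)

def mse(x, y):
    return float(np.mean((forward(x)[1] - y) ** 2))

def train(x, y, steps=4000, lr=0.1):
    global W1, b1, W2, b2
    for _ in range(steps):
        h, pred = forward(x)
        g = 2.0 * (pred - y) / len(x)   # dLoss/dpred
        gW2 = h.T @ g; gb2 = g.sum(axis=0)
        dh = (g @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
        gW1 = x.T @ dh; gb1 = dh.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2  # plain gradient descent:
        W1 -= lr * gW1; b1 -= lr * gb1  # nothing protects old tasks

# Task A: sin(x) on [0, pi].  Task B: sin(x) on [pi, 2*pi].
xa = np.linspace(0, np.pi, 64)[:, None];         ya = np.sin(xa)
xb = np.linspace(np.pi, 2 * np.pi, 64)[:, None]; yb = np.sin(xb)

train(xa, ya)
print(f"task A error after learning A:       {mse(xa, ya):.4f}")
train(xb, yb)   # fine-tune on task B only
print(f"task A error after fine-tuning on B: {mse(xa, ya):.4f}")  # degrades
```

Continual-learning techniques such as replay buffers and elastic weight consolidation exist precisely because vanilla fine-tuning behaves this way.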

Andrej Karpathy claims that despite their ability to pass advanced exams, LLMs cognitively resemble "savant kids." They possess vast, perfect memory and can produce impressive outputs, but they lack the deeper understanding and cognitive maturity to create their own culture or truly grasp what they are doing. They are not yet adult minds.

While both humans and LLMs perform Bayesian updating, humans possess a critical additional capability: causal simulation. When a pen is thrown, a human simulates its trajectory to dodge it—a causal intervention. LLMs are stuck at the level of correlation and cannot perform these essential simulations.
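To see what causal simulation buys, here is a toy sketch (the physics scenario and all numbers are invented for illustration): the agent forward-integrates a cause-and-effect model of projectile motion and acts on the predicted consequence, rather than looking up what happened in past pen throws.

```python
# A toy "causal simulation": forward-integrate projectile physics and act
# on the predicted outcome. The scenario and all numbers are illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def will_hit(p0, v0, target, radius=0.25, dt=0.005, t_max=2.0):
    """Simulate 2D motion of a thrown pen from position p0 with velocity v0;
    return True if it ever passes within `radius` meters of `target`."""
    x, y = p0
    vx, vy = v0
    t = 0.0
    while y > 0.0 and t < t_max:
        x += vx * dt            # cause: horizontal velocity
        vy -= G * dt            # cause: gravity
        y += vy * dt
        t += dt
        if (x - target[0]) ** 2 + (y - target[1]) ** 2 < radius ** 2:
            return True
    return False

# Pen leaves a hand at (0 m, 1.5 m) at 8 m/s horizontally, 2 m/s upward;
# the head is at (3 m, 1.7 m). Decide by simulating, not pattern-matching.
if will_hit((0.0, 1.5), (8.0, 2.0), (3.0, 1.7)):
    print("predicted impact -> duck")
else:
    print("predicted miss -> stay put")
```

Asking "what if the throw were harder?" is just a rerun with a different v0; that is the interventional step a purely correlational predictor has no handle on.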

AI can process vast information but cannot replicate human common sense, which is the sum of lived experiences. This gap makes it unreliable for tasks requiring nuanced judgment, authenticity, and emotional understanding, posing a significant risk to brand trust when used without oversight.

Citing the president of the Santa Fe Institute, investor James Anderson argues that current AI is the "opposite of intelligence." It excels at looking up information from a vast library of data, but it cannot think through problems from first principles. True breakthroughs will require a different architecture and a longer time horizon.

Cuban isn't worried about a "Terminator" scenario because current AI lacks a true understanding of real-world physics and consequences. He returns to the toddler analogy: knowing what happens when a sippy cup is pushed off a high chair is a level of cause-and-effect reasoning that models cannot yet replicate.

A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves at a job over time, an LLM is stateless: its weights are frozen after training, so it doesn't truly learn from interactions. It is the same static model for every user, which is a major barrier to AGI.
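A minimal sketch of what "stateless" means in practice (`call_model` is a hypothetical stub, not any specific vendor's API): the model conditions only on what the client resends each turn, and the interaction updates no weights.

```python
# Sketch of a stateless chat loop. `call_model` is a hypothetical stub
# standing in for a real chat-completion API; no specific vendor implied.
from typing import Dict, List

Message = Dict[str, str]

def call_model(messages: List[Message]) -> str:
    """The model sees ONLY `messages`. Nothing persists between calls,
    and this interaction updates no weights (stubbed reply for the demo)."""
    return f"(reply conditioned on {len(messages)} resent messages)"

history: List[Message] = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)      # the full transcript is resent each turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))      # any "memory" exists only because the
                                     # client replayed the transcript; drop
                                     # `history` and it is gone
```

Retrieval or fine-tuning can bolt memory on from the outside, but the base model itself remains the same frozen artifact for every user.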
