
Cuban argues current AI lacks true understanding because it can't foresee the consequences of its actions. He compares it to a toddler who knows pushing a sippy cup off a high chair will elicit a specific reaction—a level of consequence-awareness that AI does not yet possess.

Related Insights

A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.

People mistakenly dismiss AI's current inaccuracies as proof of its limitations. This is like calling a stumbling toddler stupid. AI is in a rapid learning phase and will soon be sprinting, creating opportunities for those who understand this developmental stage.

AI development is more like farming than engineering. Companies create conditions for models to learn but don't directly code their behaviors. This leads to a lack of deep understanding and results in emergent, unpredictable actions that were never explicitly programmed.

Demis Hassabis explains that current AI models have 'jagged intelligence'—performing at a PhD level on some tasks but failing at high-school level logic on others. He identifies this lack of consistency as a primary obstacle to achieving true Artificial General Intelligence (AGI).

True intelligence is adaptive and builds upon partial progress. Terence Tao notes current AIs demonstrate 'cleverness' by using trial-and-error at massive scale. They can't yet grab a 'handhold,' stay there, and pull others up—a cumulative process that defines collaborative human intelligence.

AI can process vast information but cannot replicate human common sense, which is the sum of lived experiences. This gap makes it unreliable for tasks requiring nuanced judgment, authenticity, and emotional understanding, posing a significant risk to brand trust when used without oversight.

Today's AI systems mirror Douglas Hofstadter's prophetic concept of a 'smart, stupid' machine. They exhibit high competence in complex domains like coding or writing essays but can make surprising, nonsensical errors, revealing a significant gap between their surface performance and genuine understanding.

While AI models excel at gathering and synthesizing information ('knowing'), they are not yet reliable at executing actions in the real world ('doing'). True agentic systems require bridging this gap by adding crucial layers of validation and human intervention to ensure tasks are performed correctly and safely.

Cuban isn't worried about a "Terminator" scenario because current AI lacks a true understanding of real-world physics and consequences. He uses the analogy of a toddler knowing what happens when they push a sippy cup off a high chair—a level of cause-and-effect reasoning that models cannot yet replicate.