
As Daoist philosophy illustrates, language is merely the trace left behind by reality, not reality itself. Since LLMs are confined to this linguistic shadow, they can exhibit intelligence but never access true wisdom, which exists beyond words and symbolic representation.

Related Insights

A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.
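The "text prediction engine" framing can be made concrete with a toy sketch (the vocabulary and logit values below are hypothetical, not from any real model): the model scores candidate next tokens and picks what is statistically likely in text, with no check against the world itself.

```python
import math

def softmax(logits):
    """Convert a dict of token scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for continuing the prompt "The sky is ..."
logits = {"blue": 4.0, "falling": 1.5, "green": 0.2}
probs = softmax(logits)

# The model "predicts" by favoring the highest-probability token --
# it reproduces what is common in its training text, which is the
# critics' point: likelihood in text, not a model of the world.
prediction = max(probs, key=probs.get)
print(prediction)
```

Prints `blue` here simply because that continuation dominates written text, illustrating why prediction alone need not imply a world model.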

The complexity in LLMs isn't intelligence emerging in silicon; it reflects our own. These models are deep because they encode the vast, causally powerful structure of human language and culture. We are looking at a high-resolution imprint of our own collective mind.

When AI pioneers like Geoffrey Hinton see agency in an LLM, they are misinterpreting the output. What they are actually witnessing is a compressed, probabilistic reflection of the immense creativity and knowledge from all the humans who created its training data. It's an echo, not a mind.

Large Language Models struggle with obvious, real-world facts because their training data (text) over-represents uncertain topics open to debate—the 'maybe sphere.' Bedrock, common-sense knowledge is rarely written down, leaving a significant gap in the AI's world model and creating a need for human oversight on obvious matters.

Judea Pearl, a foundational figure in AI, argues that Large Language Models (LLMs) are not on a path to Artificial General Intelligence (AGI). He states they merely summarize human-generated world models rather than discovering causality from raw data. He believes scaling up current methods will not overcome this fundamental mathematical limitation.

Karpathy claims that despite their ability to pass advanced exams, LLMs cognitively resemble "savant kids." They possess vast, perfect memory and can produce impressive outputs, but they lack the deeper understanding and cognitive maturity to create their own culture or truly grasp what they are doing. They are not yet adult minds.

An LLM's intelligence is tied to the language it is trained on, so its reasoning process can differ between, for example, English and French. This is unnatural for tasks like spatial reasoning, which are language-agnostic. Energy-based models (EBMs), by contrast, operate at an abstract, token-free level, mapping information directly without a language-based intermediary.
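The contrast can be sketched in miniature (the vectors and energy function below are toy illustrations, not a real EBM): instead of predicting tokens in some particular language, an energy-based model scores how compatible two abstract representations are, and prefers the lower-energy match.

```python
def energy(x, y):
    """Toy energy function: squared Euclidean distance between
    two abstract embeddings. Lower energy = better compatibility."""
    return sum((a - b) ** 2 for a, b in zip(x, y))

# Hypothetical 3-d embeddings: an encoded spatial scene and two
# candidate interpretations of it. No tokens or language involved.
scene = [0.9, 0.1, 0.4]
candidate_a = [0.85, 0.12, 0.38]  # compatible interpretation
candidate_b = [0.10, 0.90, 0.90]  # incompatible interpretation

# Inference picks whichever candidate yields the lowest energy.
best = min([candidate_a, candidate_b], key=lambda y: energy(scene, y))
print(best is candidate_a)
```

The point of the sketch is that the comparison happens directly in embedding space, so the same reasoning applies regardless of which language, if any, described the scene.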

Language models work by identifying subtle, implicit patterns in human language that even linguists cannot fully articulate. Their success broadens our definition of "knowledge" to include systems that can embody and use information without the explicit, symbolic understanding that humans traditionally require.

Humans evolved to think and have experiences long before they developed language for output. In contrast, LLMs are trained solely on input-output tasks and don't 'sit around thinking.' This absence of non-communicative internal processing represents a core difference in their potential psychology.

LLMs initially operate like philosophical nominalists, deriving truth from patterns in language use; this approach proved more effective than early essentialist AI, which tried to hand-encode the essences of concepts symbolically. Now, efforts to ground LLMs in reality are effectively adding essentialist characteristics back in: a Hegelian synthesis of the two opposing approaches.