Wittgenstein grounded language games in a shared biological reality. LLMs raise a fascinating question: are they part of our "form of life"? They are trained on human data, but they are not biological and learn differently, which may mean their "truth functions" are fundamentally alien to ours.
A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.
Reinforcement learning incentivizes AIs to find the right answer, not just mimic human text. As a result, they develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly alien and hard for human observers to follow.
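To make the two training signals concrete, here is a minimal sketch, assuming PyTorch and dummy tensors; names like `policy_logits` and `answer_is_correct` are hypothetical placeholders, not any lab's actual pipeline. The imitation loss ties every token to a human-written reference, while the outcome reward scores only the final answer, leaving the intermediate reasoning tokens free to drift.

```python
# Minimal sketch contrasting imitation with outcome-based RL (dummy tensors).
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50_000, 32
policy_logits = torch.randn(seq_len, vocab_size, requires_grad=True)

# 1) Imitation (supervised fine-tuning): every token is pulled toward the
#    human-written reference, so the reasoning trace stays human-readable.
human_token_ids = torch.randint(0, vocab_size, (seq_len,))
sft_loss = F.cross_entropy(policy_logits, human_token_ids)

# 2) Outcome-based RL (REINFORCE-style): sample a chain of thought and score
#    only the final answer. Nothing ties the intermediate tokens to human
#    language, so whatever "dialect" maximizes reward gets reinforced.
dist = torch.distributions.Categorical(logits=policy_logits)
sampled_tokens = dist.sample()                  # model's own reasoning trace
log_prob = dist.log_prob(sampled_tokens).sum()  # log-prob of the whole trace
answer_is_correct = 1.0                         # reward from an answer checker (placeholder)
rl_loss = -answer_is_correct * log_prob         # reinforce traces that got the answer right

(sft_loss + rl_loss).backward()
```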
MIT research reveals that large language models develop "spurious correlations" by associating sentence patterns with topics. This cognitive shortcut causes them to give domain-appropriate answers to nonsensical queries if the grammatical structure is familiar, bypassing logical analysis of the actual words.
Analysis of models' hidden "chain of thought" reveals the emergence of a unique internal dialect. This language is compressed, uses non-standard grammar, and contains bizarre phrases that are already difficult for humans to interpret, complicating safety monitoring and raising concerns that it may eventually become unreadable altogether.
The early Wittgenstein's "logical space of possibilities" mirrors how LLM embeddings map words into a high-dimensional space. The later Wittgenstein's "language games" explain their core function: next-token prediction and learning through interactive feedback (RLHF), where meaning is derived from use and context.
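Both halves can be seen in a few lines of code. This is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (illustrative choices, not anything specified above): the embedding matrix is the "space of possibilities," and next-token prediction is the basic move of the language game.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "Logical space": every token is a point in a high-dimensional embedding space.
embeddings = model.get_input_embeddings().weight      # shape: (vocab_size, hidden_dim)
print(embeddings.shape)

# "Language game": the core move is predicting the next token from context.
inputs = tokenizer("The meaning of a word is its use in the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                   # (1, seq_len, vocab_size)
next_token_probs = logits[0, -1].softmax(dim=-1)
top = next_token_probs.topk(5)
print([tokenizer.decode(i) for i in top.indices.tolist()])  # likely continuations
```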
Richard Sutton, author of "The Bitter Lesson," argues that today's LLMs are not truly "bitter lesson-pilled." Their reliance on finite, human-generated data introduces inherent biases and limitations, contrasting with systems that learn from scratch purely through computational scaling and environmental interaction.
The debate over AI consciousness isn't driven solely by the fact that models mimic human conversation. Researchers are uncertain because the way LLMs process information bears enough structural resemblance to the human brain to raise plausible scientific questions about shared properties such as subjective experience.
LLMs initially operate like philosophical nominalists (deriving truth from language patterns), an approach that proved more effective than early essentialist AI attempts. Now we are trying to ground them in reality, effectively adding essentialist characteristics: a Hegelian synthesis of opposing ideas.
Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.
A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves at a job over time, an LLM is stateless: it doesn't truly learn from its interactions, and it's the same static model for every user, which is a major barrier to AGI.
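Here is a small sketch of what statelessness means in practice, with `call_llm` as a hypothetical stand-in for any chat-completion endpoint (not a real library call): the model's weights never update between calls, so any "memory" has to be carried in the prompt by the client.

```python
# `call_llm` is a hypothetical placeholder for a chat-completion API.
def call_llm(messages: list[dict]) -> str:
    """Hypothetical: sends `messages` to a frozen model and returns a reply.
    The weights never change, no matter how many times it is called."""
    return "(model reply)"

history: list[dict] = []            # all "memory" lives on the client side

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)       # the full history is re-sent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

# Every user hits the same static weights; only the prompt differs.
# Drop `history`, and the model has "forgotten" everything.
chat("My name is Ada.")
print(chat("What is my name?"))     # only answerable because history was re-sent
```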