LLMs initially operate like philosophical nominalists (truth from language patterns), a model that proved more effective than early essentialist AI attempts. Now, we are trying to ground them in reality, effectively adding essentialist characteristics—a Hegelian synthesis of opposing ideas.
A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.
MIT research reveals that large language models develop "spurious correlations" by associating sentence patterns with topics. This cognitive shortcut causes them to give domain-appropriate answers to nonsensical queries if the grammatical structure is familiar, bypassing logical analysis of the actual words.
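The study's exact probes aren't reproduced here, but the shape of the experiment can be sketched: pair a real domain question with a nonsense query that keeps the same grammatical template and compare the model's answers. This minimal sketch assumes a local Hugging Face model; the choice of gpt2 and the prompts are purely illustrative (a tiny base model will not reproduce the MIT finding, the point is the structure of the probe).

```python
from transformers import pipeline

# Illustrative probe: same syntactic template, one sensible query and one
# nonsensical one. If a model keys on the template rather than the words,
# both answers come back in the same domain-appropriate style.
generate = pipeline("text-generation", model="gpt2")

real_query = "What force keeps the planets in orbit around the sun?"
nonsense_query = "What force keeps the spoons in orbit around the lunch?"

for prompt in (real_query, nonsense_query):
    out = generate(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"]
    print(out)
    print("---")
```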
Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling'—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.
Startups and major labs are focusing on "world models," which simulate physical reality and cause-and-effect. This is seen as the necessary move beyond text-based LLMs to create agents that can genuinely understand and interact with the physical world, a key step towards AGI.
Wittgenstein grounded language games in a shared biological reality. LLMs raise a fascinating question: are they part of our "form of life"? They are trained on human data, but they are not biological and learn differently, which may mean their "truth functions" are fundamentally alien to ours.
The project of creating AI that 'learns to be good' presupposes that morality is a real, discoverable feature of the world, not just a social construct. This moral realist stance posits that moral progress is possible (e.g., abolition of slavery) and that arrogance—the belief one has already perfected morality—is a primary moral error to be avoided in AI design.
The early Wittgenstein's "logical space of possibilities" mirrors how LLM embeddings map words into a high-dimensional space. The later Wittgenstein's "language games" explain their core function: next-token prediction and learning through interactive feedback (RLHF), where meaning is derived from use and context.
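Both halves of that analogy are easy to make concrete. The following is a minimal sketch, assuming GPT-2 via the Hugging Face transformers library (the model and prompt are illustrative, not tied to any particular argument in the literature): tokens are mapped to points in a high-dimensional embedding space, and the model's output is a probability distribution over possible next tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The meaning of a word is its use in the"
ids = tok(text, return_tensors="pt")

# 1) Embeddings: every token becomes a point in a high-dimensional space
#    (768 dimensions for GPT-2), a "space of possibilities" for words.
emb = model.get_input_embeddings()(ids["input_ids"])
print("embedding tensor shape:", tuple(emb.shape))

# 2) Next-token prediction: the model's core "language game" is a
#    probability distribution over possible continuations.
with torch.no_grad():
    logits = model(**ids).logits
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(i)!r}: {p.item():.3f}")
```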
The AI debate is becoming polarized as influencers and politicians present subjective beliefs with high conviction, treating them as non-negotiable facts. This hinders balanced, logic-based conversations. It is crucial to distinguish subjective beliefs from testable, objective claims to foster productive dialogue about AI's future.
A Harvard study showed LLMs can predict planetary orbits (pattern fitting) but generate nonsensical force vectors when probed. This reveals a critical gap: current models mimic data patterns but don't develop a true, generalizable understanding of underlying physical laws, separating them from human intelligence.
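The study's own methodology isn't reproduced here, but the idea of "probing for force vectors" can be illustrated with a toy sketch: take a trajectory (here generated by a simple integrator standing in for a sequence model's predictions), recover the implied accelerations by finite differences, and compare them against Newtonian gravity. All numbers and the setup below are illustrative assumptions.

```python
import numpy as np

GM = 1.0     # gravitational parameter (toy units)
dt = 0.01    # time step
steps = 2000

# Ground-truth orbit via semi-implicit Euler; pretend this trajectory
# came from a sequence model's predictions.
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
traj = []
for _ in range(steps):
    a = -GM * r / np.linalg.norm(r) ** 3
    v = v + dt * a
    r = r + dt * v
    traj.append(r.copy())
traj = np.array(traj)

# Probe: implied accelerations from second finite differences...
implied_acc = (traj[2:] - 2 * traj[1:-1] + traj[:-2]) / dt**2
# ...versus what Newton's law says they should be at those positions.
newton_acc = -GM * traj[1:-1] / np.linalg.norm(traj[1:-1], axis=1, keepdims=True) ** 3

# Agreement in direction: cosine similarity near 1 means the trajectory
# encodes a consistent inverse-square force law, not just a repeating pattern.
cos = np.sum(implied_acc * newton_acc, axis=1) / (
    np.linalg.norm(implied_acc, axis=1) * np.linalg.norm(newton_acc, axis=1)
)
print("mean cosine similarity:", cos.mean())
```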
To improve LLM reasoning, researchers feed them data that inherently contains structured logic. Training on computer code was an early breakthrough, as it teaches patterns of reasoning that transfer far beyond coding itself. Textbooks are another key source for building smaller yet effective models.
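In practice this comes down to how the pre-training data mixture is weighted. The sketch below is a hypothetical illustration of such a mixture; the source names, weights, and examples are invented for clarity and do not come from any published recipe.

```python
import random

# Hypothetical training-data mixture that up-weights sources with
# structured logic (code, textbooks) relative to generic web text.
SOURCES = {
    "code":      {"weight": 0.3, "examples": ["def add(a, b):\n    return a + b"]},
    "textbooks": {"weight": 0.2, "examples": ["A prime number has exactly two divisors."]},
    "web_text":  {"weight": 0.5, "examples": ["Breaking news from around the world..."]},
}

def sample_batch(batch_size: int, seed: int = 0):
    """Draw training examples according to the mixture weights."""
    rng = random.Random(seed)
    names = list(SOURCES)
    weights = [SOURCES[n]["weight"] for n in names]
    batch = []
    for _ in range(batch_size):
        src = rng.choices(names, weights=weights, k=1)[0]
        batch.append((src, rng.choice(SOURCES[src]["examples"])))
    return batch

for src, text in sample_batch(4):
    print(f"[{src}] {text[:40]}")
```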