Contrary to popular belief, generative AI systems such as LLMs may not become significantly more accurate. As statistical engines that predict the next most likely word, they have no true reasoning and no internal notion of "accuracy." That fundamental limitation means they will remain prone to mistakes that cannot simply be engineered away.
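To make the mechanism concrete, here is a toy sketch of next-token sampling (the probability table is invented for illustration, not taken from any real model): the system only ranks continuations by likelihood, so a wrong answer is just a low-probability token, not something the model can recognize as an error.

```python
import random

# Hypothetical next-token probabilities conditioned on a prompt prefix
# (made-up numbers, purely illustrative).
next_token_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "purple": 0.03},
}

def sample_next_token(prefix: str) -> str:
    """Pick the next token by sampling the learned probability distribution."""
    dist = next_token_probs[prefix]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("The capital of France is"))
# "purple" is merely unlikely, not recognized as wrong: there is no separate
# machinery that checks the sampled token against reality.
```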
A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.
MIT research reveals that large language models develop "spurious correlations" by associating sentence patterns with topics. This cognitive shortcut causes them to give domain-appropriate answers to nonsensical queries if the grammatical structure is familiar, bypassing logical analysis of the actual words.
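A loose analogy (not the MIT team's actual methodology) is a router that keys purely on the sentence template: any question that fits a familiar grammatical pattern gets a domain-appropriate treatment, whether or not the words mean anything. The templates below are invented for illustration.

```python
import re

# Toy "domain router" that matches only the grammatical shape of a question.
TEMPLATES = {
    r"^where is .+ located\??$": "geography",
    r"^what is the boiling point of .+\??$": "chemistry",
}

def guess_domain(question: str) -> str:
    """Assign a domain from sentence structure alone, ignoring word meaning."""
    q = question.lower().strip()
    for pattern, domain in TEMPLATES.items():
        if re.match(pattern, q):
            return domain
    return "unknown"

print(guess_domain("Where is Paris located?"))          # geography
print(guess_domain("Where is flurbish gronk located?")) # geography, despite nonsense
```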
Judea Pearl, a foundational figure in AI, argues that Large Language Models (LLMs) are not on a path to Artificial General Intelligence (AGI). He states they merely summarize human-generated world models rather than discovering causality from raw data. He believes scaling up current methods will not overcome this fundamental mathematical limitation.
Karpathy claims that despite their ability to pass advanced exams, LLMs cognitively resemble "savant kids." They possess vast, perfect memory and can produce impressive outputs, but they lack the deeper understanding and cognitive maturity to create their own culture or truly grasp what they are doing. They are not yet adult minds.
With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.
AI models are not optimized to find objective truth. They are trained on biased human data and reinforced to provide answers that satisfy the preferences of their creators. This means they inherently reflect the biases and goals of their trainers rather than an impartial reality.
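A heavily simplified sketch of that training signal, with invented candidate answers and rater scores: the update pushes the model toward whatever the raters preferred, and "true" never enters the objective.

```python
# Illustrative only: these scores stand in for a learned preference/reward model.
candidates = {
    "Confident, fluent answer that flatters the question's premise": 0.9,
    "Hedged answer noting the evidence is mixed": 0.6,
    "Blunt correction of the premise": 0.4,
}

# The policy is reinforced toward the highest-scoring output; the score
# encodes rater preference, not factual accuracy.
chosen = max(candidates, key=candidates.get)
print(chosen)
```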
Unlike humans, whose poor memory forces them to generalize and find patterns, LLMs are extremely good at memorization. Karpathy argues this is a flaw: the ease of recalling specific training documents distracts them from learning the underlying, generalizable algorithms of thought and hinders true understanding.
The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize 'dramatically worse than people,' which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.
Replit's CEO argues that today's LLMs are asymptoting on general reasoning tasks. Progress continues only in domains with binary outcomes, like coding, where synthetic data can be generated infinitely. This indicates a fundamental limitation of the current 'ingest the internet' approach for achieving AGI.
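Why coding counts as a "binary outcome" domain: a candidate solution either passes its tests or it does not, so synthetic training examples can be filtered automatically. The sketch below is a generic illustration of that verification loop (function names and test data are made up, not Replit's pipeline).

```python
# Generate-and-verify loop: keep only candidates that pass their tests.

def run_tests(solution_src: str, tests: list[tuple[int, int]]) -> bool:
    """Return True only if the candidate function passes every test case."""
    namespace: dict = {}
    try:
        exec(solution_src, namespace)          # define the candidate `square`
        return all(namespace["square"](x) == y for x, y in tests)
    except Exception:
        return False

candidates = [
    "def square(x): return x * x",   # correct, kept
    "def square(x): return x + x",   # wrong, filtered out automatically
]
tests = [(2, 4), (3, 9)]

synthetic_dataset = [src for src in candidates if run_tests(src, tests)]
print(synthetic_dataset)   # only the verified solution becomes training data
```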
A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves on a job over time, an LLM is stateless. It doesn't truly learn from interactions; it's the same static model for every user, which is a major barrier to AGI.
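A minimal sketch of what statelessness means in practice (a generic chat-call shape, not any particular vendor's API): nothing persists between calls, so the caller must replay the whole conversation, and the model's weights are identical for every user.

```python
# Stand-in for a chat-completion call: the same frozen model answers every time,
# using only the messages passed in this call.

def chat(messages: list[dict]) -> str:
    return f"(reply based only on the {len(messages)} messages just sent)"

history = [{"role": "user", "content": "My name is Ada and I prefer terse answers."}]
print(chat(history))

# A brand-new session knows none of this unless the history is replayed explicitly.
fresh_session = [{"role": "user", "content": "What do you know about me?"}]
print(chat(fresh_session))   # nothing carries over: no experiential learning
```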