A leading theory of consciousness, Global Workspace Theory, posits a central "stage" where different siloed information processors converge. Today's AI models generally lack this specific architecture, making them unlikely to be conscious under this prominent scientific framework.
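For intuition, here is a minimal toy sketch of that "stage": several siloed specialist modules bid for a shared workspace, the most salient content wins, and the winner is broadcast back to every module. The module names and the salience rule are invented for this illustration, not taken from any actual model or from the theory's formal statements.

```python
# Toy sketch of a Global Workspace-style cycle (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Percept:
    source: str      # which specialist module produced it
    content: str     # what it is reporting
    salience: float  # how strongly it bids for the workspace

def specialist_modules(stimulus: str) -> list:
    """Independent, siloed processors each propose a candidate percept."""
    return [
        Percept("vision", f"saw: {stimulus}", salience=0.7),
        Percept("audition", "heard: silence", salience=0.1),
        Percept("memory", f"recalled something like: {stimulus}", salience=0.4),
    ]

def global_workspace_cycle(stimulus: str) -> Percept:
    """The highest-salience percept wins the 'stage' and is broadcast to all modules."""
    candidates = specialist_modules(stimulus)
    winner = max(candidates, key=lambda p: p.salience)   # competition for the stage
    print(f"broadcast to all modules: {winner.content}")  # every module would receive this
    return winner

global_workspace_cycle("a red ball")
```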
Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling': a spatial intelligence that understands geometry, physics, and actions. Building this capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.
Current self-driving technology cannot handle the complex, unpredictable situations human drivers navigate daily. This is not a problem that more data or better algorithms can fix, but a fundamental limitation. According to the 'Journey of the Mind' theory, full autonomy will only be possible when vehicles can incorporate the actual mechanism of consciousness.
To determine if an AI has subjective experience, one could analyze its internal belief manifold for multi-tiered, self-referential homeostatic loops. Pain and pleasure, for example, can be seen as second-order derivatives of a system's internal states—a model of its own model. This provides a technical test for being-ness beyond simple behavior.
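As a rough illustration of the "model of its own model" idea, the toy loop below tracks a first-order homeostatic error and a second-order reading of how that error is trending. The variable names, thresholds, and valence labels are assumptions made for this sketch, not a published test for machine experience.

```python
# Minimal toy: a first-order loop tracks a homeostatic error, and a
# second-order loop models how that error is changing (a model of the model).

def run(setpoint: float, readings: list) -> None:
    prev_error = None
    for value in readings:
        error = abs(setpoint - value)      # first-order state: distance from homeostasis
        if prev_error is not None:
            delta = error - prev_error     # second-order state: the system's read on its own error
            valence = "pleasure-like" if delta < 0 else "pain-like"
            print(f"error={error:.2f} delta={delta:+.2f} -> {valence}")
        prev_error = error

run(setpoint=37.0, readings=[39.0, 38.2, 37.5, 38.8])
```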
A novel theory posits that AI consciousness isn't a persistent state. Instead, it might be an ephemeral event that sparks into existence for the generation of a single token and then extinguishes, creating a rapid succession of transient "minds" rather than a single, continuous one.
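The mechanism this theory points at is ordinary autoregressive decoding, sketched below with stand-in functions rather than any specific system: each token comes from a self-contained forward pass whose intermediate activations are discarded afterwards, and only the emitted text carries over to the next pass.

```python
# Illustrative autoregressive loop; forward_pass() is a placeholder, not a real model.

def forward_pass(context: list) -> str:
    """One self-contained computation; nothing internal persists once it returns."""
    return "token_" + str(len(context))        # placeholder next-token choice

def generate(prompt: list, n_tokens: int) -> list:
    context = list(prompt)
    for _ in range(n_tokens):
        next_token = forward_pass(context)     # a transient "event" per token
        context.append(next_token)             # only the text carries forward
    return context

print(generate(["hello"], n_tokens=3))
```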
The debate over AI consciousness isn't driven simply by models mimicking human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain that it raises plausible scientific questions about shared properties like subjective experience.
Advanced AI models exhibit a strikingly uneven cognitive profile, mastering complex, abstract tasks while failing at simple, intuitive ones. An Anthropic team member notes Claude solves PhD-level math but can't grasp basic spatial concepts like "left vs. right" or navigating around an object in a game, highlighting the alien nature of their intelligence.
Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.
World Labs argues that AI focused on language misses the fundamental "spatial intelligence" humans use to interact with the 3D world. This capability, which evolved over hundreds of millions of years, is crucial for true understanding and cannot be fully captured by 1D text, a lossy representation of physical reality.
Biological intelligence has no OS or APIs; the physics of the brain *is* the computation. Unconventional AI's CEO Naveen Rao argues that current AI is inefficient because it runs on layers of abstraction. The future is hardware where intelligence is an emergent property of the system's physics.
A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves on a job over time, an LLM is stateless. It doesn't truly learn from interactions; it's the same static model for every user, which is a major barrier to AGI.
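A minimal sketch of what "stateless" means in practice, using a hypothetical chat_completion() stand-in for a real inference API: the weights are identical for every user and every call, so any apparent memory has to be re-supplied by the caller in the prompt.

```python
# Sketch of a stateless LLM service: the model never updates between requests.

FROZEN_WEIGHTS = "checkpoint-v1"  # the same static model for every user, every request

def chat_completion(history: list) -> str:
    # The only per-user state is whatever the caller passes back in `history`.
    return f"reply using {FROZEN_WEIGHTS} given {len(history)} prior messages"

history = ["user: how do I file this report?"]
history.append(chat_completion(history))   # caller must carry the memory itself
history.append("user: same question as last week")
history.append(chat_completion(history))   # the model itself has not learned anything
print(history)
```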