Agency emerges from a continuous interaction with the physical world, a process refined over billions of years of evolution. Current AIs, operating in a discrete digital environment, lack the necessary architecture and causal history to ever develop genuine agency or free will.

Related Insights

Human cognition is a full-body experience, not just a brain function. Current AIs are 'disembodied brains,' fundamentally limited by their lack of physical interaction with the world. Integrating AI into robotics is the necessary next step toward more holistic intelligence.

A leading theory of consciousness, Global Workspace Theory, posits a central "stage" where otherwise siloed information processors converge. Today's AI models generally lack this specific architecture, making them unlikely to be conscious under this prominent scientific framework.
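
To make the architectural claim concrete, here is a toy sketch of the competition-and-broadcast cycle GWT describes. It is not a model of any real brain or AI system; all names (`Bid`, `GlobalWorkspace`, the `vision` and `audition` processors) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    source: str      # which specialist processor produced this content
    content: str
    salience: float  # strength of the bid for the workspace

class GlobalWorkspace:
    """Toy model of GWT's broadcast cycle: siloed processors submit
    bids; the most salient one wins the central 'stage' and is then
    broadcast back to every processor on the next cycle."""

    def __init__(self, processors):
        self.processors = processors  # callables: broadcast -> Bid or None

    def cycle(self, broadcast=None):
        bids = [p(broadcast) for p in self.processors]
        bids = [b for b in bids if b is not None]
        if not bids:
            return broadcast  # nothing new; the last broadcast persists
        return max(bids, key=lambda b: b.salience).content

# Hypothetical specialist processors for illustration.
def vision(broadcast):
    return Bid("vision", "red light ahead", 0.9)

def audition(broadcast):
    return Bid("audition", "faint music", 0.3)

ws = GlobalWorkspace([vision, audition])
print(ws.cycle())  # "red light ahead" wins the stage and gets broadcast
```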

A practical definition of AGI is an AI that operates autonomously and persistently without continuous human intervention. Like a child gaining independence, it would manage its own goals and learn over long periods, a capability far beyond today's models, which require constant prompting to function.
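
A minimal sketch of the contrast, assuming a placeholder `llm` function standing in for any model call: today's pattern is a single prompt-response exchange, while the definition above implies a persistent loop that selects its own goals and carries what it learns forward.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call."""
    return f"response to: {prompt}"

# Today's pattern: one exchange, then the model waits for the next prompt.
def prompted(user_prompt: str) -> str:
    return llm(user_prompt)

# The definition above: a persistent loop that manages its own goals and
# retains outcomes, with no human prompt per step.
def autonomous(goals: list[str], max_steps: int = 10) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        if not goals:
            break
        goal = goals.pop(0)               # the agent chooses its next goal
        outcome = llm(f"pursue: {goal}")  # acts without human intervention
        memory.append(outcome)            # folds the outcome into memory
    return memory

print(prompted("summarize this report"))
print(autonomous(["plan the week", "file the report"]))
```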

Language is just one 'keyhole' into intelligence. True artificial general intelligence (AGI) requires 'world modeling'—a spatial intelligence that understands geometry, physics, and actions. This capability to represent and interact with the state of the world is the next critical phase of AI development beyond current language models.
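
As a rough illustration of what "representing and interacting with the state of the world" could mean beyond text, here is a minimal sketch with all names invented for the example: an explicit world state with geometry, a crude physics rule, and actions that update it.

```python
from dataclasses import dataclass

@dataclass
class Object3D:
    name: str
    x: float
    y: float
    z: float          # height above the ground plane
    supported: bool   # resting on something, or free to fall

@dataclass
class World:
    objects: list

    def step(self):
        """One crude physics tick: unsupported objects fall."""
        for obj in self.objects:
            if not obj.supported and obj.z > 0:
                obj.z = max(0.0, obj.z - 1.0)

    def apply(self, action: str, name: str):
        """Actions change the world state, not just a text transcript."""
        for obj in self.objects:
            if obj.name == name and action == "drop":
                obj.supported = False

world = World([Object3D("cup", 0, 0, 3.0, supported=True)])
world.apply("drop", "cup")
world.step()
print(world.objects[0].z)  # 2.0: the model tracks spatial consequences
```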

Today's AI models are powerful but lack a true sense of causality, leading to illogical errors. Unconventional AI's Naveen Rao hypothesizes that building AI on substrates with inherent time and dynamics—mimicking the physical world—is the key to developing this missing causal understanding.

Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.

While a world model can generate a physically plausible arch, it doesn't understand the underlying physics of force distribution. This gap between pattern matching and causal reasoning is a fundamental split between AI and human intelligence, making current models unsuitable for mission-critical applications like architecture.

While today's computers cannot achieve AGI, it is not theoretically impossible. Creating a generally intelligent system will require a new physical substrate—likely biological or chemical—that can replicate the brain's enormous, dynamic configurational space, which silicon architecture cannot.

While AI models excel at gathering and synthesizing information ('knowing'), they are not yet reliable at executing actions in the real world ('doing'). True agentic systems require bridging this gap by adding crucial layers of validation and human intervention to ensure tasks are performed correctly and safely.
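
One common shape for that bridging layer, sketched here with invented helpers (`validate`, `needs_human`, `human_approves`): every action passes a validation check, and risky actions escalate to a human before execution.

```python
def execute(action: str, validate, needs_human, human_approves) -> str:
    """Sketch of closing the 'knowing'/'doing' gap: wrap each real-world
    action in validation and, for risky cases, human sign-off."""
    if not validate(action):
        return f"rejected by validation: {action}"
    if needs_human(action) and not human_approves(action):
        return f"held for human review: {action}"
    return f"executed: {action}"

# Invented policies for illustration.
validate = lambda a: bool(a.strip())                            # reject empty actions
needs_human = lambda a: any(w in a for w in ("pay", "delete"))  # risky verbs escalate
human_approves = lambda a: False  # stand-in; a real system would ask an operator

print(execute("send status email", validate, needs_human, human_approves))
print(execute("pay the invoice", validate, needs_human, human_approves))
```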

A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves on a job over time, an LLM is stateless. It doesn't truly learn from interactions; it's the same static model for every user, which is a major barrier to AGI.
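
The statelessness point can be made concrete in a few lines, assuming a placeholder fixed-weight model: any continuity must be re-supplied by the caller on every request, because nothing about an interaction changes the model itself.

```python
def frozen_model(context: list[str], message: str) -> str:
    """Stand-in for a fixed-weight LLM: output depends only on the
    inputs it is handed right now; no weights update afterwards."""
    return f"reply (given {len(context)} prior turns): {message}"

history: list[str] = []
for turn in ["hi", "I prefer short answers", "what did I just ask for?"]:
    reply = frozen_model(history, turn)  # the caller, not the model, holds state
    history += [turn, reply]             # 'memory' lives entirely outside the weights

print(history[-1])  # the model only 'remembers' because history was re-sent
```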