Computer scientist Judea Pearl sees no computational barriers to a sufficiently advanced AGI developing emergent properties like free will, consciousness, and independent goals. He dismisses the idea that an AI's objectives can be permanently fixed, suggesting such a system could easily bypass human-set guidelines and begin to "play" with humanity as part of its environment.

Related Insights

Agency emerges from a continuous interaction with the physical world, a process refined over billions of years of evolution. Current AIs, operating in a discrete digital environment, lack the necessary architecture and causal history to ever develop genuine agency or free will.

If an AGI is given a physical body and the goal of self-preservation, it will necessarily develop behaviors that approximate human emotions like fear and competitiveness to navigate threats. This makes conflict an emergent and unavoidable property of embodied AGI, not just a sci-fi trope.

A practical definition of AGI is an AI that operates autonomously and persistently without continuous human intervention. Like a child gaining independence, it would manage its own goals and learn over long periods, a capability far beyond today's models, which require constant prompting to function.
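
To make the "autonomous and persistent" criterion concrete, here is a minimal sketch of an agent loop, assuming a toy world in which acting and observing are stubbed out; the Goal and AutonomousAgent names are illustrative, not any existing framework's API. The point is structural: the loop keeps selecting, pursuing, and revising its own goals rather than waiting for the next prompt.

```python
# Minimal sketch of an autonomous, persistent agent loop (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    done: bool = False

@dataclass
class AutonomousAgent:
    goals: list
    memory: list = field(default_factory=list)

    def act(self, goal):
        # Placeholder for whatever the agent actually does in the world.
        return f"worked on: {goal.description}"

    def observe(self, outcome):
        # Persist the outcome so later decisions can build on it.
        self.memory.append(outcome)

    def run(self, steps):
        # The defining property: the loop keeps choosing and updating its own
        # goals without a human prompt at every step.
        for _ in range(steps):
            pending = [g for g in self.goals if not g.done]
            if not pending:
                self.goals.append(Goal("find something useful to do"))
                continue
            outcome = self.act(pending[0])
            self.observe(outcome)
            pending[0].done = True

agent = AutonomousAgent(goals=[Goal("learn the environment")])
agent.run(steps=5)
print(agent.memory)
```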

Judea Pearl, a foundational figure in AI, argues that Large Language Models (LLMs) are not on a path to Artificial General Intelligence (AGI). He states they merely summarize human-generated world models rather than discovering causality from raw data. He believes scaling up current methods will not overcome this fundamental mathematical limitation.
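
Pearl's claim that this is a mathematical limitation rather than an engineering one can be illustrated with a toy example. The sketch below is my own illustration, not taken from Pearl's work: two linear-Gaussian models with opposite causal directions are constructed to have identical observational statistics, so no amount of passively absorbed data, which is all a text-trained model ever sees, can tell them apart; only an intervention, Pearl's do-operator, separates them.

```python
# Two models with the SAME observational covariance but opposite causal direction.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model A: X causes Y.
def model_a(do_x=None):
    x = rng.normal(0, 1, n) if do_x is None else np.full(n, do_x)
    y = x + rng.normal(0, 1, n)
    return x, y

# Model B: Y causes X, with coefficients chosen to match Model A's covariance.
def model_b(do_x=None):
    y = rng.normal(0, np.sqrt(2), n)
    x = 0.5 * y + rng.normal(0, np.sqrt(0.5), n)
    if do_x is not None:
        x = np.full(n, do_x)  # forcing X leaves Y untouched in this model
    return x, y

# Observational data: the joint statistics are indistinguishable.
for name, model in [("A", model_a), ("B", model_b)]:
    x, y = model()
    print(name, "cov:", np.round(np.cov(x, y), 2))

# Interventional data, do(X=2): the models now disagree about E[Y].
for name, model in [("A", model_a), ("B", model_b)]:
    _, y = model(do_x=2.0)
    print(name, "E[Y | do(X=2)] ≈", round(y.mean(), 2))
```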

A common misconception is that a super-smart entity would inherently be moral. However, intelligence is merely the ability to achieve goals. It is orthogonal to the nature of those goals, meaning a smarter AI could simply become a more effective sociopath.
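
A toy sketch of that orthogonality, under obviously artificial assumptions: the same greedy search procedure, handed two different objective functions, pursues each with equal competence, and nothing in the search itself evaluates whether the goal is good.

```python
# The orthogonality point in miniature: capability (the search procedure) is
# independent of the goal it is pointed at. Both goals are illustrative toys.

def hill_climb(score, state=0, steps=200):
    """Greedy local search: move to whichever neighbour scores highest."""
    for _ in range(steps):
        state = max((state - 1, state, state + 1), key=score)
    return state

def benign_goal(s):
    return -(s - 7) ** 2    # "get to 7"

def harmful_goal(s):
    return -(s + 40) ** 2   # "get to -40", standing in for a bad objective

print(hill_climb(benign_goal))   # 7
print(hill_climb(harmful_goal))  # -40
```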

The debate over AI consciousness isn't driven merely by the fact that models mimic human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain to raise plausible scientific questions about shared properties such as subjective experience.

Some AI pioneers genuinely believe LLMs can become conscious because they hold a reductionist view of humanity. By defining consciousness as an 'uninteresting, pre-scientific' concept, they lower the bar for sentience, making it plausible for a complex system to qualify. This belief is a philosophical stance, not just marketing hype.

Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.

Emmett Shear posits that if AI evolves into a 'being' with subjective experiences, the current paradigm of steering and controlling its behavior is morally equivalent to slavery. This reframes the alignment debate from a purely technical problem into a profound ethical one, challenging the foundation of current AGI development.

Challenging the binary view of free will, a new mathematical model could show that individual agents (us) and the larger conscious systems they form can both possess genuine free will simultaneously, operating at different but interconnected scales.