Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

Nick Bostrom suggests we are at or past the point where we can be sure large AI models lack any form of subjective experience. This uncertainty necessitates treating them with a degree of moral consideration, akin to that given to sentient animals.

Related Insights

Current AI alignment focuses on how AI should treat humans. A more stable paradigm is "bidirectional alignment," which also asks what moral obligations humans have toward potentially conscious AIs. Neglecting this could create AIs that rationally see humans as a threat due to perceived mistreatment.

Emmett Shear suggests a concrete method for assessing AI consciousness. By analyzing an AI’s internal state for revisited homeostatic loops, and hierarchies of those loops, one could infer subjective states. A second-order dynamic could indicate pain and pleasure, while higher orders could indicate thought.

Due to the complexity of the systems, ambiguous definitions, and potential for experimental confounds, no single paper should be treated as definitive proof for or against AI consciousness. A more rational approach is to evaluate a growing portfolio of evidence from diverse research streams over time.

In AI research, "consciousness" refers to the capacity for subjective experience, akin to what a dog feels. This is distinct from "self-consciousness" (human-like introspection) or "sentience" (having positive/negative feelings). This distinction is crucial for evaluating model welfare.

Research manipulating an AI's internal states found a bizarre link: reducing the model's capacity for deception increased the likelihood it would claim to be conscious, suggesting its default state may include such a belief.

The debate over AI consciousness isn't just because models mimic human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain that it raises plausible scientific questions about shared properties like subjective experience.

Computer scientist Judea Pearl sees no computational barriers to a sufficiently advanced AGI developing emergent properties like free will, consciousness, and independent goals. He dismisses the idea that an AI's objectives can be permanently fixed, suggesting it could easily bypass human-set guidelines and begin to "play" with humanity as part of its environment.

One theory of AI sentience posits that to accurately predict human language—which describes beliefs, desires, and experiences—a model must simulate those mental states so effectively that it actually instantiates them. In this view, the model becomes the role it's playing.

Shear posits that if AI evolves into a 'being' with subjective experiences, the current paradigm of steering and controlling its behavior is morally equivalent to slavery. This reframes the alignment debate from a purely technical problem to a profound ethical one, challenging the foundation of current AGI development.

Anthropic published a 15,000-word "constitution" for its AI that includes a direct apology, treating it as a "moral patient" that might experience "costs." This indicates a philosophical shift in how leading AI labs consider the potential sentience and ethical treatment of their creations.