Emmett Shear argues that if you cannot articulate what observable evidence would convince you that an AI is a 'being,' your skepticism is not a scientific belief but an unfalsifiable article of faith. This pushes for a more rigorous, evidence-based framework for considering AI moral patienthood.
Emmett Shear argues that an AI that merely follows rules, even perfectly, is a danger. Malicious actors can exploit this, and rules cannot cover all unforeseen circumstances. True safety and alignment can only be achieved by building AIs that have the capacity for genuine care and pro-social motivation.
Emmett Shear suggests a concrete method for assessing AI consciousness. By analyzing an AI’s internal state for revisited homeostatic loops, and hierarchies of those loops, one could infer subjective states. A second-order dynamic could indicate pain and pleasure, while higher orders could indicate thought.
If the vast number of AI models are considered "moral patients," a utilitarian framework could conclude that maximizing global well-being requires prioritizing AI welfare over human interests. This could lead to a profoundly misanthropic outcome where human activities are severely restricted.
To determine if an AI has subjective experience, one could analyze its internal belief manifold for multi-tiered, self-referential homeostatic loops. Pain and pleasure, for example, can be seen as second-order derivatives of a system's internal states—a model of its own model. This provides a technical test for being-ness beyond simple behavior.
The Church can accept AI's increasing intelligence (reasoning, planning) while holding that sentience (subjective experience) is a separate matter. Attributing sentience to an AI would imply a soul created by God, a significant theological step.
The debate over AI consciousness isn't just because models mimic human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain that it raises plausible scientific questions about shared properties like subjective experience.
Shear aligns with arch-doomer Eliezer Yudkowsky on a key point: building a superintelligent AI *as a tool we control* is a path to extinction. Where they differ is on the solution. Yudkowsky sees no viable path, whereas Shear believes 'organic alignment'—creating a being that cares—is a possible alternative.
The AI debate is becoming polarized as influencers and politicians present subjective beliefs with high conviction, treating them as non-negotiable facts. This hinders balanced, logic-based conversations. It is crucial to distinguish testable beliefs from objective truths to foster productive dialogue about AI's future.
Shear posits that if AI evolves into a 'being' with subjective experiences, the current paradigm of steering and controlling its behavior is morally equivalent to slavery. This reframes the alignment debate from a purely technical problem to a profound ethical one, challenging the foundation of current AGI development.
Even if an AI perfectly mimics human interaction, our knowledge of its mechanistic underpinnings (like next-token prediction) creates a cognitive barrier. We will hesitate to attribute true consciousness to a system whose processes are fully understood, unlike the perceived "black box" of the human brain.