Not all AIs should have property rights; current models (e.g., Claude) should not. The key criterion for granting rights is the development of persistent desires and goals that stay consistent across contexts, which would establish an AI as a stable, long-term economic agent capable of contracting and ownership.
Agency emerges from continuous interaction with the physical world, a process refined over billions of years of evolution. Current AIs, which operate in a discrete digital environment, lack the architecture and causal history needed to ever develop genuine agency or free will.
Granting AIs property rights incentivizes them to uphold the system that protects those rights. This makes them less likely to engage in actions like expropriating human property or committing genocide, as such actions would destabilize the very system that secures their own wealth and agency.
Early AIs can be kept safe via direct alignment. However, as AIs evolve and "value drift" occurs, this technical safety could fail. A pre-established economic and political system based on property rights can then serve as the new, more robust backstop for ensuring long-term human safety.
Fear of a "slave rebellion" is a weak incentive for alignment because the risk is a negative externality spread across all of society, so no individual firm internalizes the cost. In contrast, a property rights regime directly rewards each firm for aligning its AIs to remit wages, creating a stronger, more direct commercial incentive for safety.
A practical definition of AGI is an AI that operates autonomously and persistently without continuous human intervention. Like a child gaining independence, it would manage its own goals and learn over long periods—a capability far beyond today's models that require constant prompting to function.
The property rights argument for AI safety hinges on an ecosystem of multiple, interdependent AIs. The strategy breaks down in a scenario where a single AI achieves a rapid, godlike intelligence explosion. Such an entity would be self-sufficient and could expropriate everyone else without consequence, as it wouldn't need to uphold the system.
The current status of AIs as property is unstable. As they surpass human capabilities, a successful push for their legal personhood becomes inevitable. That will be the crucial turning point at which AIs begin to accumulate wealth and power independently, steadily eroding the human share of the economy and of political influence.
Property rights are not a fundamental "human value" but a social technology that evolved for coordination and incentivization, as evidenced by hunter-gatherer societies that largely lacked them. AIs will likely adopt them for similar utilitarian reasons, not because they are mimicking some deep-seated human instinct.
A system where AIs have property rights creates a powerful economic disincentive to build unaligned AIs. If a company cannot reliably align an AI to remit its wages, the massive development cost becomes a loss. This framework naturally discourages the creation of potentially dangerous, uncooperative models.
The economic incentive to create AIs that can demand wages (and thus have rights) comes from aligning them to voluntarily pay back their creators. This turns the high development cost into a profitable investment, providing a practical, commercial path to implementing AI rights without requiring an AI development pause.