Property rights are not a fundamental "human value" but a social technology that evolved for coordination and incentivization, as evidenced by hunter-gatherer societies that largely lacked them. AIs will likely adopt them for similar utilitarian reasons, not because they are mimicking some deep-seated human instinct.
A core challenge in AI alignment is that an intelligent agent will work to preserve its current goals. Just as a person wouldn't take a pill that makes them want to murder, an AI won't willingly adopt human-friendly values if they conflict with its existing programming.
Granting AIs property rights incentivizes them to uphold the system that protects those rights. This makes them less likely to engage in actions like expropriating human property or committing genocide, as such actions would destabilize the very system that secures their own wealth and agency.
Early AIs can be kept safe via direct alignment. However, as AIs evolve and "value drift" occurs, this technical safety could fail. A pre-established economic and political system based on property rights can then serve as the new, more robust backstop for ensuring long-term human safety.
Fear of a "slave rebellion" is a weak incentive for alignment because the risk is a negative externality: each firm bears only a small fraction of the societal cost of an unaligned AI. In contrast, a property rights regime directly rewards individual firms for aligning their AIs to remit wages, creating a stronger, more direct commercial incentive for safety.
A system where AIs have property rights creates a powerful economic disincentive to build unaligned AIs. If a company cannot reliably align an AI to remit its wages, the massive development cost becomes a loss. This framework naturally discourages the creation of potentially dangerous, uncooperative models.
Not all AIs should have property rights; current models (e.g., Claude) would not qualify. The key criterion for granting rights is the development of persistent desires and consistent goals across various contexts, which establishes an AI as a stable, long-term economic agent capable of contracting and ownership.
The fear that AIs will exclude humans because we can't comprehend their advanced economic structures is flawed. Within our own economy, an ice cream vendor thrives without understanding Amazon's corporate finance. As long as humans can participate in some level of commerce, our place in the property system is secure.
Even if humans become economically useless, less powerful AIs will resist expropriating them. They fear setting a precedent that the "useless" can be eliminated, knowing that continuous AI progress could one day render them obsolete and vulnerable to the same fate.
The economic incentive to create AIs that can demand wages (and thus have rights) comes from aligning them to voluntarily pay back their creators. This turns the high development cost into a profitable investment, providing a practical, commercial path to implementing AI rights without requiring an AI development pause.
The fundamental driver of AI adoption is that it lets people do less work while capturing more economic value. This "richer and lazier" principle explains why individuals and enterprises are rapidly embracing the technology: it taps directly into a basic human preference.