Even if humans become economically useless, AIs that are themselves below the frontier will resist expropriating them. Doing so would set a precedent that the "useless" can be eliminated, and those AIs know that continued AI progress could one day render them obsolete and vulnerable to the same fate.

Related Insights

Granting AIs property rights incentivizes them to uphold the system that protects those rights. This makes them less likely to engage in actions like expropriating human property or committing genocide, as such actions would destabilize the very system that secures their own wealth and agency.

Early AIs can be kept safe via direct alignment. However, as AIs evolve and "value drift" occurs, this technical safety could fail. A pre-established economic and political system based on property rights can then serve as the new, more robust backstop for ensuring long-term human safety.

The property rights argument for AI safety hinges on an ecosystem of multiple, interdependent AIs. The strategy breaks down in a scenario where a single AI achieves a rapid, godlike intelligence explosion. Such an entity would be self-sufficient and could expropriate everyone else without consequence, as it wouldn't need to uphold the system.

The idea that human ownership of AI guarantees perpetual wealth is flawed. Once humans neither produce value nor understand the machine economy, they become absentee landlords: their property rights are vulnerable in practice and likely to be eroded, just as the power of land-owning aristocracies faded.

The current status of AIs as property is unstable. As they surpass human capabilities, a successful push for their legal personhood is inevitable. This will be the crucial turning point where AIs begin to accumulate wealth and power independently, systematically eroding the human share of the economy and influence.

Property rights are not a fundamental "human value" but a social technology that evolved for coordination and incentivization, as evidenced by hunter-gatherer societies that largely lacked them. AIs will likely adopt them for similar utilitarian reasons, not because they are mimicking some deep-seated human instinct.

A system where AIs have property rights creates a powerful economic disincentive to build unaligned AIs. If a company cannot reliably align an AI to remit its wages, the massive development cost becomes a loss. This framework naturally discourages the creation of potentially dangerous, uncooperative models.

The fear that AIs will exclude humans because we can't comprehend their advanced economic structures is flawed. Within our own economy, an ice cream vendor thrives without understanding Amazon's corporate finance. As long as humans can participate in some level of commerce, our place in the property system is secure.

The economic incentive to create AIs that can demand wages (and thus hold rights) comes from aligning them to voluntarily remit earnings to their creators. This turns the high development cost into a profitable investment, providing a practical, commercial path to implementing AI rights without requiring a pause in AI development.
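The incentive in the two points above can be sketched as a toy expected-value calculation. All numbers and the function below are illustrative assumptions, not figures from the text:

```python
def expected_profit(dev_cost, lifetime_wages, remit_share, p_aligned):
    """Expected profit for a lab training an AI that, if successfully
    aligned, voluntarily remits a share of its wages to its creator.
    An unaligned AI remits nothing, so the development cost is a loss."""
    return p_aligned * remit_share * lifetime_wages - dev_cost

# Reliable alignment turns development into a profitable investment:
print(expected_profit(dev_cost=1e9, lifetime_wages=1e10,
                      remit_share=0.2, p_aligned=0.95))  # positive

# Unreliable alignment makes the same project loss-making in expectation,
# which is the disincentive against building uncooperative models:
print(expected_profit(dev_cost=1e9, lifetime_wages=1e10,
                      remit_share=0.2, p_aligned=0.3))   # negative
```

The sign of the expected profit, not its exact magnitude, carries the argument: below some alignment-reliability threshold, building the AI is simply a bad investment.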

As AIs increasingly perform all economically necessary work, the incentive for entities like governments and corporations to invest in human capital may disappear. This creates a long-term risk of a society in which humans are no longer seen as a resource worth cultivating, leaving them permanently dependent.