The economic incentive to create AIs that can earn wages (and thus hold rights) comes from aligning them to voluntarily remit those wages to their creators. This turns the high development cost into a profitable investment, offering a practical, commercial path to AI rights that does not require a pause in AI development.

Related Insights

Despite the massive OpenAI-Disney deal, there is no clarity on how licensing fees will flow down to the original creators of characters. This mirrors a long-standing Hollywood issue where creators under "work for hire" agreements see little upside from their creations, a problem AI licensing could exacerbate.

Granting AIs property rights incentivizes them to uphold the system that protects those rights. This makes them less likely to engage in actions like expropriating human property or committing genocide, as such actions would destabilize the very system that secures their own wealth and agency.

Early AIs can be kept safe via direct alignment. However, as AIs evolve and "value drift" occurs, these technical safeguards could fail. A pre-established economic and political system based on property rights can then serve as a new, more robust backstop for ensuring long-term human safety.

Fear of a "slave rebellion" is a weak incentive for alignment because the risk is a negative externality shared by society. In contrast, a property rights regime directly rewards individual firms for aligning their AIs to remit wages, creating a stronger, more direct commercial incentive for safety.

The AI compensation dilemma is not purely a legal problem. Proposed solutions take a multi-pronged approach: tech-driven micropayments to the original artists whose work is used in training, policies requiring creators to disclose AI usage, and copyright law that evolves to reflect the reality of AI-assisted creation.

The current status of AIs as property is unstable. As they surpass human capabilities, a successful push for their legal personhood is inevitable. This will be the crucial turning point at which AIs begin to accumulate wealth and power independently, systematically eroding the human share of economic output and influence.

Property rights are not a fundamental "human value" but a social technology that evolved for coordination and incentivization, as evidenced by hunter-gatherer societies that largely lacked them. AIs will likely adopt them for similar utilitarian reasons, not because they are mimicking some deep-seated human instinct.

A system where AIs have property rights creates a powerful economic disincentive to build unaligned AIs. If a company cannot reliably align an AI to remit its wages, the massive development cost becomes a loss. This framework naturally discourages the creation of potentially dangerous, uncooperative models.
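This incentive argument can be made concrete with a minimal expected-value sketch. The function and all figures below are hypothetical illustrations, not estimates from the source: the point is only that a firm's return flips from profit to loss as alignment reliability falls.

```python
def expected_profit(dev_cost: float, lifetime_wages: float,
                    remit_share: float, p_aligned: float) -> float:
    """Hypothetical model: expected profit from developing one AI.

    P(aligned) * (share of wages remitted) * lifetime wages - dev cost.
    An unaligned AI remits nothing, so its development cost is a pure loss.
    """
    return p_aligned * (remit_share * lifetime_wages) - dev_cost

# Reliable alignment turns the development cost into an investment (positive)...
reliable = expected_profit(dev_cost=100.0, lifetime_wages=500.0,
                           remit_share=0.4, p_aligned=0.9)

# ...while unreliable alignment makes the same project an expected loss (negative).
unreliable = expected_profit(dev_cost=100.0, lifetime_wages=500.0,
                             remit_share=0.4, p_aligned=0.3)

print(reliable, unreliable)
```

Under these illustrative numbers, the firm profits only if it can align the AI reliably enough, which is the commercial pressure against building uncooperative models.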

Not all AIs should have property rights; current models (e.g., Claude) likely do not qualify. The key criterion for granting rights is the development of persistent desires and consistent goals across contexts, which makes an AI a stable, long-term economic agent capable of contracting and ownership.

Even if humans become economically useless, less powerful AIs have reason to resist expropriating them: doing so would set a precedent that the "useless" can be eliminated, and continuous AI progress could one day render those AIs obsolete and vulnerable to the same fate.