Social media feeds should be viewed as the first mainstream AI agents. They operate with a degree of autonomy to make decisions on our behalf, shaping our attention and daily lives in ways that often misalign with our own intentions. This serves as a cautionary tale for the future of more powerful AI agents.
The common feeling of needing to 'detox' from a phone or computer is a sign of a broken user relationship. Unlike a sofa, we can't simply replace it. This aversion stems from devices being filled with applications whose incentives are not aligned with our well-being, a problem AI will amplify.
A new 'common source' model is proposed to solve the incentive problem between open and closed-source software. This hybrid approach would allow users to modify the software to fit their needs (like open source) while still enabling creators to monetize their work, preventing exploitation by large enterprises.
Instead of being stuck with rigid software, a future powered by decentralized AI could allow users to modify their tools directly. For example, a doctor frustrated with an electronic medical record system could use natural language to instantly change the software to fit their workflow, reclaiming control over their digital environment.
When companies use black-box AI for hiring, it creates a no-win 'arms race.' Applicants use prompt injection and other tricks to game the system, while companies build countermeasures to detect them. This escalatory cycle is a 'war of attrition' where the underlying goal of finding the right candidate is lost.
Technological advancement, particularly in AI, moves faster than legal and social frameworks can adapt. This creates 'lawless spaces,' akin to the Wild West, where powerful new capabilities exist without clear rules or recourse for those negatively affected. This leaves individuals vulnerable to algorithmic decisions about jobs, loans, and more.
OpenAI's platform strategy, which centralizes app distribution through ChatGPT, mirrors Apple's iOS model. This creates a 'walled garden' that could follow Cory Doctorow's 'inshittification' pattern: initially benefiting users, then locking them in, and finally exploiting them once they cannot easily leave the ecosystem.
