The concentration of AI power in a few tech giants is a market choice, not a technological inevitability. Publicly funded models built without a profit motive, like the one from Switzerland's ETH Zurich, demonstrate that competitive, ethically trained AI can be created outside corporate control.
While tech giants could technically replicate Perplexity, their core business models—advertising for Google, e-commerce for Amazon—create a fundamental conflict of interest. An independent player can align purely with the user's best interests, creating a strategic opening that incumbents are structurally unable to fill without cannibalizing their primary revenue streams.
Rather than government regulation, market forces will address AI bias. As studies reveal biases in models from OpenAI and Google, competitors like Elon Musk's xAI can market Grok's neutrality as a key selling point, attracting users and forcing the entire market to improve.
The AI industry faces a major perception problem, fueled by fears of job loss and wealth inequality. To build public trust, tech companies should emulate Gilded Age industrialists like Andrew Carnegie by using their vast cash reserves to fund tangible public benefits, creating a social dividend.
Elon Musk co-founded OpenAI as a nonprofit meant to be the philosophical opposite of Google, which he believed held a monopoly on AI and was run by a CEO who wasn't taking AI safety seriously. The goal was to create an open-source counterweight, not a for-profit entity.
The emergence of high-quality open-source models from China drastically shortens the window in which closed-source leaders can hold an advantage. This competition is healthy for startups, giving them a broader array of cheaper, powerful models to build on and preventing any single company from becoming a chokepoint.
The current trend toward closed, proprietary AI systems is a misguided and ultimately ineffective strategy. Ideas and talent circulate regardless of corporate walls. True, defensible innovation is fostered by openness and the rapid exchange of research, not by secrecy.
To prevent the concentration of power in a few tech companies, the Catholic social teaching of "subsidiarity" can be applied to AI. The principle holds that problems should be handled at the most local level capable of addressing them, which aligns directly with the ethos of open-source and sovereign AI.
The most profound innovations in history, such as vaccines, personal computers, and air travel, spread their value broadly across society rather than having it captured by a few corporations. AI could follow the same pattern and benefit the public more than a handful of tech giants, especially as geopolitical pressures force commoditization.
To avoid a future where a few companies control AI and hold society hostage, the underlying intelligence layer must be commoditized. This prevents "landlords" of proprietary models from extracting rent and ensures broader access and competition.
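To make the "commoditized intelligence layer" idea concrete, here is a minimal sketch (an illustration added here, not taken from the source): if an application talks to models only through a generic, OpenAI-compatible chat interface, which many open-weight inference servers also expose, the underlying model becomes a swappable commodity rather than a landlord. The provider names, URLs, keys, and model identifiers below are placeholders, not real endpoints.

```python
# Sketch only: the app depends on a generic chat-completion interface, so any
# OpenAI-compatible endpoint (hosted proprietary, hosted open-weight, or a
# locally served open model) can be swapped in via configuration, not code.
from openai import OpenAI

PROVIDERS = {
    # Hypothetical configuration entries; replace with real endpoints and keys.
    "proprietary": {"base_url": "https://api.example-closed.com/v1", "model": "frontier-model"},
    "open_hosted": {"base_url": "https://api.example-open.com/v1", "model": "open-weights-70b"},
    "local":       {"base_url": "http://localhost:8000/v1", "model": "open-weights-8b"},
}

def ask(provider: str, prompt: str) -> str:
    """Send the same prompt to whichever provider is currently configured."""
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key="YOUR_KEY")  # placeholder key
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Because the interface, not the model, is the dependency, no single vendor
# becomes a chokepoint: switching "landlords" is a one-line config change.
print(ask("local", "Summarize the case for commoditizing the model layer."))
```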
The idea that one company will achieve AGI and dominate is challenged by current trends. The proliferation of powerful, specialized open-source models from global players suggests a future where AI technology is diverse and dispersed, not hoarded by a single entity.