To prevent the concentration of power in a few tech companies, the Catholic social-teaching principle of "subsidiarity" can be applied to AI. Subsidiarity, which holds that problems should be solved at the most local level possible, aligns directly with the ethos of open-source and sovereign AI.

Related Insights

New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology: when people have a financial stake in a technology's success, they are far more likely to defend it than to fight it.

The current trend toward closed, proprietary AI systems is a misguided and ultimately ineffective strategy. Ideas and talent circulate regardless of corporate walls. True, defensible innovation is fostered by openness and the rapid exchange of research, not by secrecy.

In its largest user study, OpenAI's research team frames AI not just as a product but as a fundamental utility, stating its belief that "access to AI should be treated as a basic right." This perspective signals a long-term ambition for AI to become as integral to society as electricity or internet access.

The most profound innovations in history, like vaccines, PCs, and air travel, distributed value broadly to society rather than being captured by a few corporations. AI could follow this pattern, benefiting the public more than a handful of tech giants, especially with geopolitical pressures forcing commoditization.

The key to successful open-source AI isn't uniting everyone into one massive project. EleutherAI's model has proved more effective: small, siloed teams with guaranteed compute and end-to-end funding, each dedicated to a single, specific research problem. This avoids organizational overhead and ensures projects actually get finished.

The choice between open and closed-source AI is not just technical but strategic. For startups, feeding proprietary data to a closed-source provider like OpenAI, which competes across many verticals, creates long-term risk. Open-source models offer "strategic autonomy" and prevent dependency on a potential future rival.

Early versions of Catholic AI struggled to apply core doctrines to users' personal problems. The team realized that papal homilies are distillations of complex theology for everyday life, providing a perfect dataset for teaching the model how to generalize from first principles.

Unlike secular models designed for diverse values, Catholic AI is built with the primary goal of accurately representing and adhering to the Magisterium (the Church's teaching authority). Every design choice serves this fidelity.

While making powerful AI open-source creates risks from rogue actors, it is preferable to centralized control by a single entity. Widespread access acts as a deterrent akin to mutually assured destruction, preventing any one group from using AI as a tool for absolute power.

With pronouncements on AI's impact on human dignity, Pope Leo XIV is framing the technology as a critical religious and ethical issue. This matters because the Pope influences the beliefs of 1.4 billion Catholics worldwide, making the Vatican a powerful force in the societal debate over AI's trajectory and regulation.