The Church has a tradition of embracing technological progress, from monks copying books to using the printing press and radio. The slow adoption of the internet is seen as an exception the Church is now trying to correct with AI.
Aligning AI with any single ethical framework is fraught with disagreement. A better target is "human flourishing," since there is far broader consensus on its fundamental components, such as health, family, and education, making it a more robust and universal goal for AGI.
Unlike secular models designed for diverse values, Catholic AI is built with the primary goal of accurately representing and adhering to the Magisterium (the Church's teaching authority). Every design choice serves this fidelity.
Communicating AI's implications to church leaders, who are primarily philosophers and theologians, requires a translation layer. This "middleware" bridges the gap between their worldview and the technical realities of AI, enabling better understanding and guidance.
To prevent the concentration of power in a few tech companies, the Catholic social teaching of "subsidiarity" is applied to AI. This principle, which favors solving problems at the most local level possible, aligns directly with the ethos of open-source and sovereign AI.
The Church can accept AI's increasing intelligence (reasoning, planning) while holding that sentience (subjective experience) is a separate matter. Attributing sentience to an AI would imply a soul created by God, a significant theological step.
Early versions of Catholic AI struggled to apply core doctrines to users' personal problems. The team realized that papal homilies are distillations of complex theology for everyday life, providing a perfect dataset for teaching the model how to generalize from first principles.
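One way to picture the homily-based approach is as a supervised fine-tuning corpus that pairs a doctrinal principle with its everyday application. The sketch below is purely illustrative, assuming hypothetical field names and a JSONL output format commonly used for fine-tuning data; it is not the Catholic AI team's actual pipeline.

```python
import json

def homily_to_examples(homily):
    """Pair each doctrinal principle a homily cites with the passage
    that applies it to daily life, yielding instruction-tuning records.
    The 'passages' structure is a hypothetical annotation scheme."""
    examples = []
    for passage in homily["passages"]:
        examples.append({
            "instruction": ("Apply the following teaching to a concrete "
                            "situation in everyday life: "
                            + passage["principle"]),
            "response": passage["application"],
            "source": homily["title"],  # provenance for later auditing
        })
    return examples

def build_dataset(homilies, path):
    """Write one JSON object per line (JSONL), a common format for
    fine-tuning corpora."""
    with open(path, "w", encoding="utf-8") as f:
        for homily in homilies:
            for ex in homily_to_examples(homily):
                f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Toy input showing only the shape of the data, not real annotations.
homilies = [
    {
        "title": "Homily on the Good Samaritan",
        "passages": [
            {
                "principle": "Love of neighbour extends to the stranger.",
                "application": "Care for the person in need whom you "
                               "did not choose.",
            }
        ],
    }
]

build_dataset(homilies, "homilies_sft.jsonl")
```

The value of such a corpus is that each record already encodes the generalization step the model struggled with: first principle in, lived application out.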
Superabundance from AI should free people from GDP-driven work to discover their unique gifts and contribute to society out of passion, not necessity. This fosters stronger families and communities, where human-made goods hold premium value.
Drawing an analogy to *Westworld*, the argument is that cruelty toward entities that look and act human degrades our own humanity, regardless of the entity's actual consciousness. For our own moral health, we should treat advanced, embodied AIs with respect.
For use cases demanding strict fidelity to a complex knowledge domain like Catholic theology, fine-tuning existing models proves inadequate over the long tail of user queries. This necessitates the more expensive path of training a model from scratch.
