We scan new podcasts and send you the top 5 insights daily.
With no major Western country establishing comprehensive AI policy, the Vatican is filling the void. It has set its own national AI rules and, given its neutral moral standing, is positioning itself as a global referee for what is real versus fake.
The traditional government model of setting a regulation and waiting years to assess it is obsolete for AI. A new approach is needed: a dynamic board of government, industry, and academic leaders collaborating to make and update rules in real time.
The idea of nations collectively creating policies to slow AI development for safety is naive. Game theory dictates that the immense competitive advantage of achieving AGI first will drive nations and companies to race ahead, making any global regulatory agreement effectively unenforceable.
The Vatican's engagement with AI highlights a key use case for sovereign models: ensuring technology aligns with deep-seated institutional values. The goal is to prevent an AI from adopting the generic values of a frontier model, instead reflecting the specific ethical principles of the organization it represents.
The Church has a tradition of embracing technological progress, from monks copying books to adopting the printing press and radio. Its slow embrace of the internet is seen as an exception it is now trying to correct with AI.
To prevent the concentration of power in a few tech companies, the Catholic social teaching of "subsidiarity" is applied to AI. This principle, which favors solving problems at the most local level possible, aligns directly with the ethos of open-source and sovereign AI.
For AI safety, Demis Hassabis advocates for an international regulatory body, similar to the International Atomic Energy Agency. This body would have technical experts who audit frontier models against agreed-upon benchmarks, checking for undesirable properties like deception and ensuring public confidence through independent verification.
Factory's CEO argues that regulating AI at the state level is ineffective. Like climate change or nuclear proliferation, AI is a global phenomenon. A rule in California has no bearing on development in China or Europe, making localized efforts largely symbolic.
California's push for aggressive AI regulation is not primarily driven by voter demand. Instead, Sacramento lawmakers see themselves as a de facto national regulator, filling a perceived federal vacuum. They are actively coordinating with the European Union, aiming to set standards for the entire U.S. and control a nascent multi-trillion-dollar industry.
Facing a federal vacuum on AI policy, major players like OpenAI and Google are surprisingly endorsing state-level regulations in California and New York. This counter-intuitive move serves two purposes: it creates a manageable, de facto national standard they can influence, and it pressures a gridlocked Congress to finally act to avoid a messy patchwork of state laws.
With pronouncements on AI's impact on human dignity, Pope Leo XIV is framing the technology as a critical religious and ethical issue. This matters because the Pope influences the beliefs of 1.4 billion Catholics worldwide, making the Vatican a powerful force in the societal debate over AI's trajectory and regulation.