India is taking a measured, "no rush" approach to AI governance. The strategy is to first leverage and adapt existing legal frameworks—like the IT Act for deepfakes and data protection laws for privacy—rather than creating new, potentially innovation-stifling AI-specific legislation.
AI video platform Synthesia built its governance on three pillars established at its founding: never creating digital replicas without consent, moderating all content before generation, and collaborating with governments on practical regulation. This proactive framework is core to its enterprise strategy.
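As an illustrative sketch only (not Synthesia's actual implementation), the first two pillars map naturally onto a gate that runs before any video is rendered. The `CONSENT_RECORDS` store, `BLOCKED_TOPICS` list, and the `moderate` check below are hypothetical stand-ins for real consent databases and moderation classifiers:

```python
from dataclasses import dataclass

@dataclass
class GenerationRequest:
    avatar_id: str  # the digital replica the user wants to animate
    script: str     # the text the avatar will speak

# Hypothetical stand-ins for real consent and moderation services.
CONSENT_RECORDS = {"avatar-001"}  # replicas with signed consent on file
BLOCKED_TOPICS = ("medical advice", "election")

def has_consent_record(avatar_id: str) -> bool:
    """Pillar 1: never animate a replica without documented consent."""
    return avatar_id in CONSENT_RECORDS

def moderate(script: str) -> bool:
    """Pillar 2: screen content *before* generation, not after publication."""
    return not any(topic in script.lower() for topic in BLOCKED_TOPICS)

def approve(request: GenerationRequest) -> bool:
    # Both checks must pass before any compute is spent on rendering.
    return has_consent_record(request.avatar_id) and moderate(request.script)

if __name__ == "__main__":
    req = GenerationRequest(avatar_id="avatar-001", script="Welcome to our demo.")
    print("approved" if approve(req) else "rejected")
```

The key design point is ordering: the checks sit in front of the generation pipeline, so non-consensual or disallowed content is rejected before a video ever exists.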
Instead of competing to build sovereign AI stacks from the chip up, India's strategic edge is in applying commoditized AI models to its unique, population-scale problems. This leverages the country's deep experience with real-world, large-scale implementation.
While US AI labs debate abstract "constitutions" to define model values, Poland's AI project is preoccupied with a more immediate problem: navigating strict data usage regulations. These legal frameworks act as a de facto set of constraints, making an explicit "Polish AI constitution" a lower priority for now.
To introduce AI into a high-risk environment like legal tech, begin with tasks that don't involve sensitive data, such as automating marketing copy. This approach proves AI's value and builds internal trust, paving the way for future, higher-stakes applications like reviewing client documents.
Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.
Contrary to their current stance, major AI labs will pivot to support national-level regulation. The motivation is strategic: a single, predictable federal framework is preferable to navigating an increasingly complex and contradictory patchwork of state-by-state AI laws, which stifles innovation and increases compliance costs.
India's Ministry of Electronics and IT (MeitY) acts as a promoter and facilitator for the AI sector, not a traditional regulator. It uses "policy nudges" and strategic programs like the IndiaAI Mission to coordinate and foster collaboration between private companies, academia, and research organizations.
The UK's strategy of criminalizing specific harmful AI outcomes, like non-consensual deepfakes, is more effective than the EU AI Act's approach of regulating model size and development processes. Focusing on harmful outcomes is a more direct way to mitigate societal damage.
To navigate the high stakes of public sector AI, classify initiatives into low, medium, and high risk. Begin with 'low-hanging fruit' like automating internal backend processes that don't directly face the public. This builds momentum and internal trust before tackling high-risk, citizen-facing applications.
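A minimal sketch of such a triage rubric, assuming two illustrative attributes (whether a system faces citizens directly and whether it touches sensitive data) drive the tiering; the attribute names and the example mappings are assumptions, not a prescribed public-sector standard:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def classify_initiative(citizen_facing: bool, sensitive_data: bool) -> Risk:
    """Illustrative triage: internal, non-sensitive work is the low-hanging fruit."""
    if citizen_facing and sensitive_data:
        return Risk.HIGH    # e.g. automated benefits-eligibility decisions
    if citizen_facing or sensitive_data:
        return Risk.MEDIUM  # e.g. a public chatbot answering on published info
    return Risk.LOW         # e.g. automating an internal backend process

# Start with the LOW tier to build momentum and internal trust.
print(classify_initiative(citizen_facing=False, sensitive_data=False))  # Risk.LOW
```

In practice the rubric would weigh more factors (legal exposure, reversibility of errors, affected populations), but even a coarse two-axis version makes the "start low, earn trust, then escalate" sequencing concrete.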
In sectors like finance or healthcare, bypass initial regulatory hurdles by implementing AI on non-sensitive, public information, such as analyzing a company podcast. This builds momentum and demonstrates value while more complex, high-risk applications are vetted by legal and IT teams.