Instead of reacting to unsanctioned tool usage, forward-thinking organizations create formal AI councils. These cross-functional groups (risk, privacy, IT, business lines) establish a proactive process for dialogue and evaluation, addressing governance issues before tools become deeply embedded.

Related Insights

When creating AI governance, differentiate based on risk. High-risk actions, like uploading sensitive company data into a public model, require rigid, enforceable "policies." Lower-risk, judgment-based areas, like when to disclose AI use in an email, are better suited for flexible "guidelines" that allow for autonomy.

An effective AI strategy pairs a central task force for enablement—handling approvals, compliance, and awareness—with empowerment of frontline staff. The best, most elegant applications of AI will be identified by those doing the day-to-day work.

Employees often use personal AI accounts ("secret AI") because they're unsure of company policy. The most effective way to combat this is a central document detailing approved tools, data policies, and access instructions. This "golden path" removes ambiguity and empowers safe, rapid experimentation.

Treating AI risk management as a final step before launch leads to failure and loss of customer trust. To be effective, it must instead be an integrated, continuous process that runs through the entire AI development pipeline, from conception through deployment and iteration.

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.
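One shape such a policy can take in practice is a default-deny gate on agent actions. The sketch below is illustrative only, assuming hypothetical action names rather than the APIs of any tool named above: read-only actions run autonomously, state-changing actions require human sign-off, and anything unlisted is blocked.

```python
# Hypothetical default-deny gate for agent tool calls. Read-only actions
# run autonomously; state-changing actions require explicit human sign-off.
AUTONOMOUS_ACTIONS = {"search_documents", "summarize_thread"}
APPROVAL_REQUIRED = {"send_email", "update_crm_record", "issue_refund"}

def authorize(action: str, human_approved: bool = False) -> bool:
    """Return True only if the agent may execute this action now."""
    if action in AUTONOMOUS_ACTIONS:
        return True
    if action in APPROVAL_REQUIRED:
        # Escalate: the action proceeds only after a person signs off.
        return human_approved
    # Deny by default, so new agent capabilities stay blocked until
    # the policy is explicitly updated to cover them.
    return False
```

The default-deny posture is the point: as vendors add agent capabilities, each one arrives blocked until the organization has consciously classified it.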

When marketing teams adopt unsanctioned AI tools, it's typically not intentional subversion but an attempt to achieve business outcomes under pressure. IT leaders should interpret this "shadow IT" as a signal of urgent business needs, opening a dialogue about enabling innovation with proper guardrails.

For enterprises, scaling AI content without built-in governance is reckless. Rather than manual policing, guardrails like brand rules, compliance checks, and audit trails must be integrated from the start. The principle is "AI drafts, people approve," ensuring speed without sacrificing safety.
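To make the "AI drafts, people approve" pattern concrete, here is a minimal Python sketch with hypothetical function names and a deliberately toy brand check (a real system would use maintained rule sets or classifiers): every draft passes an automated guardrail, waits for a human approval flag, and accumulates an audit trail along the way.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Draft:
    content: str
    author_model: str
    audit_trail: list[str] = field(default_factory=list)

def log(draft: Draft, event: str) -> None:
    # Timestamp every decision so the draft's history is auditable.
    draft.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def passes_brand_rules(draft: Draft) -> bool:
    # Toy brand rule: block a few banned phrases.
    banned = ("guaranteed results", "risk-free")
    return not any(phrase in draft.content.lower() for phrase in banned)

def publish(draft: Draft, human_approved: bool) -> bool:
    log(draft, f"drafted by {draft.author_model}")
    if not passes_brand_rules(draft):
        log(draft, "rejected: brand rule violation")
        return False
    if not human_approved:
        log(draft, "held: awaiting human approval")  # AI drafts...
        return False
    log(draft, "published")  # ...people approve
    return True
```

Run end to end, every draft either ends with a "published" entry that a human explicitly approved, or a recorded reason it was held or rejected, giving compliance teams the audit trail without manual policing.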

Treating AI as a technology initiative delegated to IT is a critical error. Given its transformative impact on competitive advantage, risk, and governance, AI strategy must be owned and overseen by the board of directors. Board ignorance of AI initiatives creates significant, potentially company-ending, corporate risk.

To balance security with agility, enterprises should run two AI tracks. Let the CIO's office develop secure, custom models for sensitive data while simultaneously empowering business units like marketing to use approved, low-risk SaaS AI tools to maintain momentum and drive immediate value.

Synthesia views robust AI governance not as a cost but as a business accelerator. Early investments in security and privacy build the trust necessary to sell into large enterprises like Fortune 500 companies, which prioritize brand safety and risk mitigation over speed.