
Effective AI governance starts with an "AI Council" composed of passionate users, IT, legal, and operations staff. Unlike a top-down "Center of Excellence" that dictates rules, this council's primary role is to create enabling policies and guidelines that empower grassroots adoption and safe experimentation across the organization.

Related Insights

AI is a multidisciplinary challenge, not just a tech or data problem. Assigning governance to a single department creates a "hot potato" scenario where no one takes full ownership. Success requires a dedicated, cross-functional executive team that regularly and genuinely engages with the program's goals.

An effective AI strategy pairs a central task force for enablement—handling approvals, compliance, and awareness—with empowerment of frontline staff. The best, most elegant applications of AI will be identified by those doing the day-to-day work.

The most successful companies deploying AI use a "leadership lab and crowd" model. Leadership provides clear direction, while the entire organization is given access to tools to experiment and discover novel use cases. An internal team then harvests these grassroots ideas for strategic implementation.

To operationalize AI, move beyond a tech-only committee. Sensei created a trifecta of the Chief Human Success Officer, VP of Finance, and CTO. This structure ensures AI initiatives are evaluated based on their impact on people (HR), financial viability (Finance), and technical implementation, creating a holistic roadmap.

Effective AI adoption requires a three-part structure. "Leadership" sets the vision and incentives. The "Crowd" (all employees) experiments with AI tools in their own workflows. The "Lab" (a dedicated internal team, not just IT) refines and scales the best ideas that emerge from the crowd.

Instead of reacting to unsanctioned tool usage, forward-thinking organizations create formal AI councils. These cross-functional groups (risk, privacy, IT, business lines) establish a proactive process for dialogue and evaluation, addressing governance issues before tools become deeply embedded.

The primary focus for leaders should be fostering a culture of safe, ethical, and collaborative AI use. This involves mandatory training and creating shared learning spaces, like Slack channels for prompt sharing, rather than just focusing on tool procurement.

Snowflake established a cross-functional AI council with volunteers who dedicate 10-20% of their time to experimentation. This avoids chaotic, duplicated efforts from a company-wide mandate. The council then shares learnings and rolls out proven use cases to the broader team quarterly, ensuring structured adoption.

Effective AI integration isn't just a leadership directive or a grassroots movement; it requires both. Leadership must set the vision and signal AI's importance, while the organization must empower natural early adopters to experiment, share learnings, and pave the way for others.

Esper's executive team preemptively created a cross-functional AI policy, appointing a coordinator while mandating that each functional leader develop their own strategy. This prevented rogue AI use and ensured a cohesive, company-wide approach instead of isolated efforts.