Esper's executive team preemptively created a cross-functional AI policy, appointing a coordinator while mandating that each functional leader develop their own strategy. This prevented rogue AI use and ensured a cohesive, company-wide approach instead of isolated efforts.
Esper established a clear policy for employees to pilot new AI tools: they can experiment freely as long as no proprietary data is ingested, then submit promising tools to an IT- and security-led committee that commits to a quick decision. This approach balances fostering innovation with maintaining security.
To ensure governance and avoid redundancy, Experian centralizes AI development. This approach treats AI as a core platform capability, allowing for the reuse of models and consistent application of standards across its global operations.
An effective AI strategy pairs a central task force for enablement—handling approvals, compliance, and awareness—with empowerment of frontline staff. The best, most elegant applications of AI will be identified by those doing the day-to-day work.
Effective AI adoption requires a three-part structure. "Leadership" sets the vision and incentives. The "Crowd" (all employees) experiments with AI tools in their own workflows. The "Lab" (a dedicated internal team, not just IT) refines and scales the best ideas that emerge from the crowd.
Instead of reacting to unsanctioned tool usage, forward-thinking organizations create formal AI councils. These cross-functional groups (risk, privacy, IT, business lines) establish a proactive process for dialogue and evaluation, addressing governance issues before tools become deeply embedded.
Employees often use personal AI accounts ("secret AI") because they're unsure of company policy. The most effective way to combat this is a central document detailing approved tools, data policies, and access instructions. This "golden path" removes ambiguity and empowers safe, rapid experimentation.
Rather than allowing siloed AI experiments, Boehringer Ingelheim uses a centralized "AI innovation team." This overarching function supports the entire enterprise, pilots ideas to "fail fast or scale up," ensures compliance, and builds economies of scale.
To avoid chaos in AI exploration, assign roles. Designate one person as the "pilot" to actively drive new tools for a set period. Others act as "passengers"—they are engaged and informed but follow the pilot's lead. This focuses team energy and prevents conflicting efforts.
Effective AI integration isn't just a leadership directive or a grassroots movement; it requires both. Leadership must set the vision and signal AI's importance, while the organization must empower natural early adopters to experiment, share learnings, and pave the way for others.
CEOs who merely issue an "adopt AI" mandate and delegate it down the hierarchy fall into the "delegation trap," setting their teams up for failure. To demystify AI and drive genuine adoption from the top down, leaders must actively participate in hackathons and create "play space" for experimentation themselves.