Esper established a clear policy for employees to pilot new AI tools: they can experiment freely as long as they don't feed proprietary data into the tools, then submit promising ones to a committee led by IT and security, which promises a quick decision. This approach balances fostering innovation with maintaining security.
Mandating AI usage can backfire by making the technology feel like a threat rather than an opportunity. A better approach is to create "safe spaces" for exploration. Atlassian runs "AI builders weeks," blocking off synchronous time for cross-functional teams to tinker together. The celebrated outcome is learning, not a finished product, which removes pressure and encourages genuine experimentation.
The biggest hurdle for enterprise AI adoption is uncertainty. A dedicated "lab" environment allows brands to experiment safely alongside partners like Microsoft. There they can pressure-test AI applications, fine-tune models on their own data, and build confidence before deploying at scale, which addresses fears of losing control over data and brand voice.
Effective AI adoption requires a three-part structure. "Leadership" sets the vision and incentives. The "Crowd" (all employees) experiments with AI tools in their own workflows. The "Lab" (a dedicated internal team, not just IT) refines and scales the best ideas that emerge from the crowd.
Instead of reacting to unsanctioned tool usage, forward-thinking organizations create formal AI councils. These cross-functional groups (risk, privacy, IT, business lines) establish a proactive process for dialogue and evaluation, addressing governance issues before tools become deeply embedded.
AI agent platforms are typically priced by usage, not seats, making initial costs low. Instead of issuing a top-down mandate for one tool, leaders should encourage teams to expense and experiment with several options. The best solution for each team will emerge organically through use.
Snowflake established a cross-functional AI council of volunteers who dedicate 10-20% of their time to experimentation. This avoids the chaotic, duplicated effort a company-wide mandate would create. Each quarter, the council shares learnings and rolls out proven use cases to the broader organization, ensuring structured adoption.
Employees often use personal AI accounts ("secret AI") because they're unsure of company policy. The most effective way to combat this is a central document detailing approved tools, data policies, and access instructions. This "golden path" removes ambiguity and empowers safe, rapid experimentation.
Employees hesitate to use new AI tools for fear of looking foolish or getting fired for misuse. Successful adoption depends less on training courses and more on creating a safe environment with clear guardrails that encourages experimentation without penalty.
To balance security with agility, enterprises should run two AI tracks. Let the CIO's office develop secure, custom models for sensitive data while empowering business units like marketing to use approved, low-risk SaaS AI tools to maintain momentum and drive immediate value.
Esper's executive team preemptively created a cross-functional AI policy, appointing a coordinator and requiring each functional leader to develop a strategy for their own area. This prevented rogue AI use and ensured a cohesive, company-wide approach instead of isolated efforts.