Employees often use personal AI accounts ("secret AI") because they're unsure of company policy. The most effective way to combat this is a central document detailing approved tools, data policies, and access instructions. This "golden path" removes ambiguity and empowers safe, rapid experimentation.
Business leaders often assume their teams are independently adopting AI. In reality, employees are hesitant to admit they don't know how to use it effectively and are waiting for formal training and a clear strategy. The responsibility falls on leadership to initiate AI education.
Mandating AI usage can backfire by making the technology feel like a threat rather than an opportunity. A better approach is to create "safe spaces" for exploration. Atlassian runs "AI builders weeks," blocking off synchronous time for cross-functional teams to tinker together. The celebrated outcome is learning, not a finished product, which removes pressure and encourages genuine experimentation.
An effective AI strategy pairs a central task force for enablement—handling approvals, compliance, and awareness—with empowerment of frontline staff. The best, most elegant applications of AI will be identified by those doing the day-to-day work.
The primary focus for leaders should be fostering a culture of safe, ethical, and collaborative AI use. This involves mandatory training and shared learning spaces, such as Slack channels for prompt sharing, rather than a narrow focus on tool procurement.
Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.
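To make the distinction concrete, here is a toy sketch (not drawn from the source) of what separates an agent from a chat model: the agent executes side effects, so policy has to constrain actions and autonomy, not just acceptable prompts. The refund function, limit, and planned action below are all hypothetical.

```python
# Toy illustration: an agent acts on the user's behalf, so governance means
# constraining actions, not just text. Everything here is simulated.

def send_refund(customer_id: str, amount: float) -> str:
    # In a real deployment this would call a payments API on the user's behalf.
    return f"Refunded ${amount:.2f} to {customer_id}"

ALLOWED_ACTIONS = {"send_refund": send_refund}  # policy: explicit action allowlist
REFUND_LIMIT = 100.0                            # policy: autonomy ceiling (hypothetical)

# Pretend the model planned this step from a support ticket.
planned = {"action": "send_refund", "args": {"customer_id": "C-4821", "amount": 250.0}}

handler = ALLOWED_ACTIONS.get(planned["action"])
if handler is None:
    print("Blocked: action is not on the approved list")
elif planned["args"]["amount"] > REFUND_LIMIT:
    print("Escalated: amount exceeds the autonomous limit, human approval required")
else:
    print(handler(**planned["args"]))
```

The allowlist and spending ceiling are exactly the kinds of rules a chat-only generative AI policy never needed to specify.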
To avoid chaos in AI exploration, assign roles. Designate one person as the "pilot" to actively drive new tools for a set period. Others act as "passengers"—they are engaged and informed but follow the pilot's lead. This focuses team energy and prevents conflicting efforts.
Companies with an "open by default" information culture, where documents are accessible unless explicitly restricted, have a significant head start in deploying effective AI. This transparency provides a rich, interconnected knowledge base that AI agents can leverage immediately, unlike in siloed organizations where information access is a major bottleneck.
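As a toy illustration (not from the source) of why access defaults matter: retrieval can only surface documents an agent is allowed to read, so the same query succeeds in an open-by-default org and dead-ends in a siloed one. The documents and flags below are invented, and a substring match stands in for real search.

```python
# In a siloed org a document is invisible unless access was granted;
# in an open-by-default org it is visible unless explicitly restricted.
docs = [
    {"title": "Q3 roadmap",       "text": "ship the search overhaul",  "granted": False, "restricted": False},
    {"title": "Comp bands",       "text": "salary ranges by level",    "granted": False, "restricted": True},
    {"title": "Pricing playbook", "text": "enterprise discount rules", "granted": False, "restricted": False},
]

def visible(doc: dict, open_by_default: bool) -> bool:
    return not doc["restricted"] if open_by_default else doc["granted"]

def retrieve(query: str, open_by_default: bool) -> list[str]:
    # Substring match stands in for embedding search in a real agent.
    return [d["title"] for d in docs if visible(d, open_by_default) and query in d["text"]]

print(retrieve("discount", open_by_default=False))  # [] -- the agent hits a silo
print(retrieve("discount", open_by_default=True))   # ['Pricing playbook']
```

Note that the genuinely sensitive document ("Comp bands") stays hidden in both modes: open by default is not the same as open for everything.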
For companies given a broad "AI mandate," the most tactical and immediate starting point is to stand up a private, internally hosted large language model, a company-controlled counterpart to ChatGPT. This provides a quick win: employees can use generative AI for productivity without exposing sensitive intellectual property or code to public models.
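As a minimal sketch of what that looks like in practice (assumptions, not the source's setup): self-hosted serving stacks such as vLLM and Ollama expose an OpenAI-compatible API, so internal tools only need to point at an in-house URL. The endpoint, token, and model name below are hypothetical placeholders.

```python
# Minimal sketch: talk to a privately hosted model through an
# OpenAI-compatible endpoint, so no prompt data leaves the company network.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical internal endpoint
    api_key="internal-service-token",                # issued by your own gateway, not a public provider
)

response = client.chat.completions.create(
    model="company-llm",  # whatever model the platform team has deployed
    messages=[
        {"role": "system", "content": "You are an internal assistant. Company data stays in-house."},
        {"role": "user", "content": "Summarize this design doc for a new teammate."},
    ],
)
print(response.choices[0].message.content)
```

Because the interface matches the public API, the platform team can later swap the model behind the endpoint without changing any employee-facing tools.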
Employees hesitate to use new AI tools for fear of looking foolish or getting fired for misuse. Successful adoption depends less on training courses and more on creating a safe environment with clear guardrails that encourages experimentation without penalty.
Effective AI policies focus on establishing principles for human conduct rather than just creating technical guardrails. The central question isn't what the tool can do, but how humans should responsibly use it to benefit employees, customers, and the community.