Companies with an "open by default" information culture, where documents are accessible unless explicitly restricted, have a significant head start in deploying effective AI. This transparency provides a rich, interconnected knowledge base that AI agents can leverage immediately, unlike in siloed organizations where information access is a major bottleneck.
For enterprise AI, standard RAG struggles with granular permissions and relationship-based questions. Atlassian's "teamwork graph" maps entities such as teams, tasks, and documents, along with the relationships between them. That structure lets it answer queries like "What did my team do last week?"—a question simple vector search would fumble by returning the documents most textually similar to the query, rather than traversing who is on the team and what they recently completed.
Mandating AI usage can backfire by making the technology feel like a threat. A better approach is to create "safe spaces" for exploration. Atlassian runs "AI builders weeks," blocking off synchronous time for cross-functional teams to tinker together. The celebrated outcome is learning, not a finished product, which removes pressure and encourages genuine experimentation.
The primary focus for leaders should be fostering a culture of safe, ethical, and collaborative AI use. This involves mandatory training and creating shared learning spaces, like Slack channels for prompt sharing, rather than just focusing on tool procurement.
Moving PRDs and other product artifacts from Confluence or Notion directly into the codebase's repository gives AI coding assistants persistent, local context. This adjacency means the AI doesn't need external tool access (such as an MCP server) to understand the "why" behind the code, leading to better suggestions and iterations.
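One lightweight way to create that adjacency is a small sync script that mirrors exported PRD markdown into a docs folder beside the code. Everything here is an assumption for illustration—the export directory, the `docs/prd/` destination, and the `mirror_prds` helper are invented, not any tooling Atlassian has described:

```python
import shutil
from pathlib import Path

def mirror_prds(export_dir: Path, repo_docs: Path) -> list[Path]:
    """Copy exported PRD markdown files into the repo so coding
    assistants pick them up as local context.

    export_dir: folder of .md files exported from Confluence/Notion
    repo_docs:  destination inside the repo, e.g. <repo>/docs/prd/
    """
    repo_docs.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(export_dir.glob("*.md")):
        dest = repo_docs / src.name
        shutil.copy2(src, dest)  # preserves timestamps for staleness checks
        copied.append(dest)
    return copied
```

Run on a schedule or in CI, this keeps the "why" documents versioned alongside the code they explain, so the assistant reads them the same way it reads source files.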
Companies can build authority and community by transparently sharing the specific third-party AI agents and tools they use for core operations. This "open source" approach to the operational stack serves as a high-value, practical playbook for others in the ecosystem, building trust.
The current trend toward closed, proprietary AI systems is a misguided and ultimately ineffective strategy. Ideas and talent circulate regardless of corporate walls. True, defensible innovation is fostered by openness and the rapid exchange of research, not by secrecy.
To avoid generic, creatively lazy AI output ("slop"), Atlassian's Sharif Mansour injects three key ingredients: the team's unique "taste" (style/opinion), specific organizational "knowledge" (data and context), and structured "workflow" (deployment in a process). This moves beyond simple prompting to create differentiated results.
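The three ingredients map naturally onto a structured prompt. This sketch is my own framing of that idea, not a template Mansour has published—the section labels, `build_prompt` helper, and example strings are all illustrative:

```python
def build_prompt(task: str, taste: str, knowledge: str, workflow: str) -> str:
    """Assemble taste, knowledge, and workflow context ahead of the task,
    so the model writes from the team's position rather than from generic
    training-data averages."""
    return "\n\n".join([
        f"## Taste (our voice and opinions)\n{taste}",
        f"## Knowledge (org-specific context)\n{knowledge}",
        f"## Workflow (where this output goes)\n{workflow}",
        f"## Task\n{task}",
    ])

prompt = build_prompt(
    task="Draft release notes for the new automation rules.",
    taste="Plainspoken, slightly irreverent; never say 'leverage'.",
    knowledge="Automation rules were the top request from enterprise admins.",
    workflow="Output becomes the changelog entry PMM reviews before release.",
)
```

Leave any one section empty and the output drifts back toward "slop": taste without knowledge is stylish but wrong, knowledge without workflow produces text nobody can ship.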
With AI, codebases become queryable knowledge bases for everyone, not just engineers. Granting broad, read-only access to systems like GitHub from day one allows new hires in any role (product, design, data) to use AI to get context and onboard dramatically faster.
The ultimate value of AI will be its ability to act as a long-term corporate memory. By feeding it historical data—ICPs, past experiments, key decisions, and customer feedback—companies can create a queryable "brain" that dramatically accelerates onboarding and institutional knowledge transfer.
Atlassian's AI onboarding agent, Nora, answers new hires' logistical questions, reducing their reluctance to bother managers. More strategically, this initial, low-stakes interaction serves as an effective on-ramp, conditioning employees from day one to view AI as a standard collaborative tool for their core work.