We scan new podcasts and send you the top 5 insights daily.
To accelerate organizational learning in AI, incentivize the sharing of failures. A Fortune 500 company gives employees redeemable points for sharing use cases, but offers *extra points* for detailing a failed experiment and the resulting lesson. This normalizes failure and prevents others from repeating the same mistakes.
When developing internal AI tools, adopt a 'fail fast' mantra. Many use cases fail not because the idea is bad, but because the underlying models aren't yet capable. It's critical to regularly revisit these failed projects, as rapid advancements in AI can quickly make a previously infeasible idea viable.
Mandating AI usage can backfire by making the tools feel like a threat. A better approach is to create "safe spaces" for exploration. Atlassian runs "AI builders weeks," blocking off synchronous time for cross-functional teams to tinker together. The celebrated outcome is learning, not a finished product, which removes pressure and encourages genuine experimentation.
To prevent a culture of blame, Sierra holds public "lessons learned" sessions for any failure, from lost deals to bugs. This frames failure as a collective responsibility of the team, not an individual's fault. The focus is on fixing the underlying system, keeping the paranoia directed at processes, not people.
To combat the natural reluctance to admit failure and to foster decisiveness, some innovative companies offer bonuses to employees who kill their own underperforming projects. This practice creates a culture of honesty and overcomes the personal attachment that often keeps bad ideas alive far too long.
Jensen Huang rejects "praise publicly, criticize privately." He criticizes publicly so the entire organization can learn from one person's mistake, optimizing for company-wide learning over individual comfort and avoiding political infighting.
Employees achieving massive AI-driven productivity gains ('secret cyborgs') often hide their methods, fearing punishment or layoffs. To scale these innovations, leadership must create explicit incentive structures that reward them for sharing their methods, turning individual hacks into organizational advantages.
Instead of stigmatizing failure, LEGO embeds a formal "After Action Review" (AAR) process into its culture, with reviews happening daily at some level. This structured debrief forces teams to analyze why a project failed and apply those specific learnings across the organization to prevent repeat mistakes.
To foster psychological safety for innovation, leaders must publicly celebrate the effort and learning from failed projects, not just successful outcomes. Putting a team on a pedestal for a six-month project that didn't ship sends a stronger signal than any monetary award.
Stalled AI projects often stem from cultural issues: leaders rush for big wins instead of adopting an experimental "build to learn" mindset, and they fail to address poor data quality and the organizational fear that leads teams to automate old processes rather than invent new ones.