A shocking 29% of employees, including 44% of Gen Z, admit to sabotaging their company's AI strategy. This resistance, driven by a lack of trust in leadership, is seen by 76% of executives as a serious threat. It manifests as active disruption and risky behaviors like entering sensitive data into public AI tools.
The workforce is bifurcating into AI super-users and laggards. 92% of C-suite executives are actively cultivating a new class of elite employees, who are 3x more likely to receive promotions and raises. Concurrently, 60% of these leaders plan to lay off employees who cannot or will not use AI, creating a two-tiered system.
In a powerful signal of internal optimism, Anthropic's employee stock tender offer went undersubscribed. Mirroring a similar trend at OpenAI, employees are holding onto their shares, even at a $380B valuation, reflecting a strong belief that the company's value will climb further ahead of an IPO.
Contrary to fears that AI would replace security firms, the consensus has shifted. Analysts now believe AI massively increases the surface area for vulnerabilities, compounding the need for security. This creates a multi-billion dollar opportunity for firms protecting new AI-driven attack vectors, making cyber a resilient software sector.
Enterprise AI's biggest hurdle is a leadership crisis, not a technical one. Data reveals a massive disconnect: 61% of executives trust AI for critical decisions, while only 9% of workers do. The chasm also erodes faith in managers themselves: 75% of employees say they trust AI more than their own boss, and the result is expensive initiatives that fail.
Industries historically slow to adopt software are now rapidly embracing AI. Unlike rigid workflow tools, AI excels at parsing dense text and augmenting the nuanced, unstructured work common in these fields. This allows new AI vendors to gain traction without needing to rip-and-replace legacy systems of record like EHRs.
