Companies are 'unanimously ignoring' AI preparedness for their workforce. This paralysis stems from leadership's fear and uncertainty over whether employees are assets to be upskilled for AI-driven success or liabilities to be made redundant by automation.
Lacking standardized metrics for responsible AI, investors are treating corporate transparency as a key proxy for governance maturity. A company's willingness to disclose its AI practices is seen as a direct indicator of its risk management, influencing investment decisions.
A significant gap exists between companies that state they have an AI strategy (44%) and those with a formal governance framework in place (13%). This suggests firms prioritize value extraction over establishing ethical guardrails, risking a loss of investor and consumer trust.
In the absence of clear local regulations, over half of global companies, including those outside Europe, cite the EU AI Act as their governance framework. This shows that regulation provides a needed safety net for innovation, rather than stifling it.
The foundation's own use of LLMs to analyze 3,000 disclosures showed that accuracy is highly sensitive to prompt design. Specificity, traceability, and continuous human oversight were essential to avoid misinterpreting varied corporate language and report structures.
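The traceability principle described here can be sketched in code: require the model to return a verbatim quote as evidence, then mechanically verify that quote against the source text before accepting the answer, routing anything unverifiable to human review. This is a minimal illustration, not the foundation's actual pipeline; the prompt wording, JSON field names, and helper functions are all hypothetical.

```python
import json

# Hypothetical prompt: a narrow question, a fixed output schema, and a demand
# for verbatim evidence so every answer can be traced back to the source text.
PROMPT_TEMPLATE = (
    "Does the following disclosure describe a human-oversight policy for AI? "
    "Answer in JSON with keys 'label' ('yes' or 'no') and 'evidence' "
    "(an exact quote from the text, or an empty string).\n\n"
    "Disclosure:\n{disclosure}"
)

def build_prompt(disclosure: str) -> str:
    return PROMPT_TEMPLATE.format(disclosure=disclosure)

def validate_response(response_json: str, disclosure: str):
    """Accept a model answer only if its quoted evidence appears verbatim in
    the source disclosure; otherwise return None to flag it for human review."""
    try:
        answer = json.loads(response_json)
    except json.JSONDecodeError:
        return None  # malformed output -> human review
    evidence = answer.get("evidence", "")
    if answer.get("label") == "yes" and evidence not in disclosure:
        return None  # untraceable claim -> human review
    return answer

# Example: a traceable answer passes; a fabricated quote is rejected.
text = "Our AI ethics board reviews all model deployments quarterly."
ok = validate_response(
    '{"label": "yes", "evidence": "reviews all model deployments"}', text
)
bad = validate_response(
    '{"label": "yes", "evidence": "we maintain a kill switch"}', text
)
```

The deterministic check is the point: it converts "continuous human oversight" from a vague aspiration into a concrete triage rule, so reviewers only see the answers the model could not substantiate.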
The finding that only one in eight companies discloses human-oversight policies for AI isn't just a reporting gap. It signals a deeper, structural failure: firms can announce high-level governance concepts but lack the operational infrastructure to implement them day-to-day.
