Companies focus on strategy (CEO pressure) and risk (regulation), but the most significant unaddressed gap is workforce AI literacy. Literacy is treated as a long-term 'vitamin' rather than an urgent 'painkiller,' yet without it, governance programs cannot scale effectively across an organization.
Unlike large enterprises that build AI, smaller organizations primarily buy AI solutions. Their governance should therefore focus on rigorously questioning vendors and clarifying internal roles for oversight, since expertise is often concentrated in just a few individuals.
Given AI's rapid evolution, eliminating risk is unrealistic. The AI assurance ecosystem—including audits and certifications—should instead focus on a more pragmatic goal: creating shared information standards that allow organizations to effectively gauge, price, and potentially transfer AI-related risks.
A pilot AI certification program revealed that even simplified criteria were interpreted inconsistently. This suggests AI systems are too dynamic for static, checklist-based certification. A better approach is to grant auditors discretion and invest heavily in their specialized training and education.
Healthcare offers a model for AI governance that extends beyond its regulatory framework. The industry has a pre-existing infrastructure of trust, experience with diverse use cases, established practices for post-deployment monitoring, and a deep understanding of human-in-the-loop systems, all directly applicable to AI.
Don't invent an AI governance framework in a vacuum. The most effective approach is to first observe how your existing IT, data, and security governance processes function in practice. This allows you to identify the 'path of least resistance' and overlay new AI-specific concerns onto established workflows.
