Security leaders don't wait for government mandates; they adopt market-driven standards like SOC 2 to protect their business and customers. AI governance is following a similar path, with companies establishing robust practices out of necessity, not just for compliance.
Many companies have formed AI governance committees, but these groups often lack the deep technical expertise needed to ask probing questions. As a result, they tend to accept superficial answers from vendors, creating a false sense of security while leaving real risks unmitigated.
The concept of "human-in-the-loop" is often misapplied. To effectively manage autonomous AI agents, companies must map the agent's entire workflow and insert mandatory human approval at critical decision points, not just as a final check or initial hand-off.
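To make that concrete, here is a minimal Python sketch of the pattern: the agent's workflow is mapped into discrete steps, and any step tagged high-risk blocks on explicit human approval before it executes, rather than the human being consulted only at kickoff or after the fact. The names (`Step`, `Risk`, `run_workflow`, `human_approves`) are illustrative, not the API of any particular agent framework.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Risk(Enum):
    LOW = auto()
    HIGH = auto()  # steps that move money, data, or take customer-facing action


@dataclass
class Step:
    name: str
    risk: Risk
    action: Callable[[], None]


def human_approves(step: Step) -> bool:
    # Blocking prompt for the demo; a real system would route to a review queue.
    answer = input(f"Approve agent step '{step.name}'? [y/N] ")
    return answer.strip().lower() == "y"


def run_workflow(steps: list[Step]) -> None:
    # Walk the mapped workflow; HIGH-risk steps cannot run without approval.
    for step in steps:
        if step.risk is Risk.HIGH and not human_approves(step):
            print(f"Halted: '{step.name}' rejected by reviewer.")
            return
        step.action()


run_workflow([
    Step("draft refund email", Risk.LOW, lambda: print("drafted")),
    Step("issue $500 refund", Risk.HIGH, lambda: print("refund issued")),  # critical decision point
    Step("log outcome", Risk.LOW, lambda: print("logged")),
])
```

The design point is that the approval gate lives at the decision point itself: a rejected step halts the agent before the consequential action happens, not after.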
Large enterprises often provision secure, licensed AI tools for their staff. Mid-market employees, lacking these resources, are more likely to turn to free consumer-grade AI, inadvertently feeding it proprietary company data and creating significant security vulnerabilities.
Formal auditing for AI systems is nascent. Only a small fraction (<5%) of clients currently demand checks on AI accuracy. It will likely take 6-12 months for this demand to reach a critical mass that compels auditors to broadly incorporate AI-specific testing.
To avoid being overwhelmed by AI risk, enterprises should categorize threats into four distinct buckets: 1) AI in your product, 2) internal employee use, 3) AI in vendor tools, and 4) malicious use by bad actors. This framework allows for targeted, practical solutions for each category.
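As a rough illustration of how this framework might be encoded in a simple risk register, here is a hedged Python sketch. The four bucket names follow the list above; the example controls and the `AIRiskBucket`/`controls` names are hypothetical, not a prescribed taxonomy.

```python
from enum import Enum


class AIRiskBucket(Enum):
    # The four buckets described above.
    PRODUCT = "AI in your product"
    EMPLOYEE_USE = "Internal employee use"
    VENDOR = "AI in vendor tools"
    MALICIOUS = "Malicious use by bad actors"


# Hypothetical mapping from each bucket to targeted controls,
# e.g. rows in a risk register owned by different teams.
controls: dict[AIRiskBucket, str] = {
    AIRiskBucket.PRODUCT: "model evals, red-teaming, release gates",
    AIRiskBucket.EMPLOYEE_USE: "approved-tool policy, DLP on prompts",
    AIRiskBucket.VENDOR: "security questionnaires, contract clauses",
    AIRiskBucket.MALICIOUS: "phishing/deepfake detection, incident response",
}

for bucket, control in controls.items():
    print(f"{bucket.value}: {control}")
```

Splitting risks this way lets each bucket get an owner and a control set suited to it, instead of one committee trying to reason about "AI risk" as a single undifferentiated threat.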
