Housing AI strategy within IT is a critical error. The most valuable applications of AI are not technology projects but business innovations. The conversation must be led by business leaders asking what is now possible for customers and partners, with IT acting as an enabler, not the primary owner.
Successful AI pilots find a 'sweet spot': a problem large enough to be seen as representative of a broader organizational challenge, so the lessons scale, yet small enough to deliver value quickly, sustaining momentum and avoiding organizational fatigue.
AI isn't a technology to be applied to existing processes. It's a foundational layer, like an operating system, that fundamentally reshapes how businesses create value, make decisions, and operate. This perspective forces a complete rethink of strategy, not just an upgrade.
The most common failure in AI strategy is adhering to a linear, sequential planning process where each department creates its own strategy in isolation. AI's power lies in connecting disparate data sets across functions, which a siloed, 'baton-passing' approach inherently prevents.
Simply publishing ethical AI principles is insufficient. True ethical implementation requires grounding those principles in concrete technology choices—like sandboxing tools to prevent data leaks, choosing models based on training transparency, and enforcing data sovereignty rules. Ethics must be systemic, not just declarative.
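To make "systemic, not declarative" concrete, a principle can be expressed as an automated check that every tool must pass before adoption. The sketch below is purely illustrative: the `ToolConfig` fields, the region list, and the rule wording are assumptions, not a real framework.

```python
from dataclasses import dataclass

@dataclass
class ToolConfig:
    """Hypothetical description of an AI tool under evaluation."""
    name: str
    sandboxed: bool                 # runs isolated, so data cannot leak out
    training_data_disclosed: bool   # vendor is transparent about training data
    data_region: str                # where user data is processed and stored

# Example data-sovereignty rule: only these regions are approved (assumption).
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def policy_violations(cfg: ToolConfig) -> list[str]:
    """Return every ethical-AI policy rule this tool configuration breaks."""
    violations = []
    if not cfg.sandboxed:
        violations.append("tool must be sandboxed to prevent data leaks")
    if not cfg.training_data_disclosed:
        violations.append("model training data must be disclosed")
    if cfg.data_region not in ALLOWED_REGIONS:
        violations.append(f"data region {cfg.data_region!r} violates sovereignty rules")
    return violations

# A tool with undisclosed training data processing data outside approved
# regions fails two checks:
risky = ToolConfig("summarizer-x", sandboxed=True,
                   training_data_disclosed=False, data_region="us-east-1")
print(policy_violations(risky))
```

Encoding the rules this way means a violation blocks procurement automatically, rather than relying on everyone having read a principles document.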
Don't rely on traditional project milestones to gauge AI progress. Instead, measure success through granular unit economics and operational metrics. Metrics like 'cost per release' or 'cycle time per feature' provide immediate feedback on whether your strategic hypothesis is valid, enabling rapid iteration.
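Metrics like these fall out of data most teams already have. A minimal sketch of computing them from a release log; the log structure and every figure in it are invented for illustration.

```python
from datetime import date

# Hypothetical release log: dates, feature counts, and costs are illustrative.
releases = [
    {"started": date(2024, 1, 2),  "shipped": date(2024, 1, 9),  "features": 3, "cost": 12_000},
    {"started": date(2024, 1, 10), "shipped": date(2024, 1, 20), "features": 5, "cost": 15_000},
]

total_cost = sum(r["cost"] for r in releases)
total_features = sum(r["features"] for r in releases)
total_days = sum((r["shipped"] - r["started"]).days for r in releases)

cost_per_release = total_cost / len(releases)            # unit economics
cycle_time_per_feature = total_days / total_features     # days per shipped feature

print(f"cost per release: ${cost_per_release:,.0f}")          # → $13,500
print(f"cycle time per feature: {cycle_time_per_feature:.1f} days")  # → 2.1 days
```

Tracked release over release, a falling cost per release or cycle time per feature confirms the strategic hypothesis faster than any milestone review.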
