Contrary to the popular belief that failing to adopt AI is the biggest risk, some companies may be eroding their value by building out AI practices too quickly. The market, and clients' needs, may not yet be ready for advanced AI integration, leading to misallocated resources and slower-than-expected returns.
Companies that experiment endlessly with AI but fail to operationalize it face the biggest risk of falling behind. The danger lies not in ignoring AI, but in lacking the change management and workflow redesign needed to move from small-scale tests to full integration.
Companies feel immense pressure to integrate AI to stay competitive, leading to massive spending. However, this rush means they lack the infrastructure to measure ROI, creating a paradox of anxious investment without clear proof of value.
Product managers should leverage AI to get 80% of the way there on tasks like competitive analysis, but must apply their own intellect to the final 20%. Fully abdicating responsibility to AI can introduce factual errors and hallucinations that, if baked into a product, result in costly rework and strategic missteps.
Most companies are not vanguard tech firms. Rather than pursuing speculative, high-failure-rate AI projects, small and medium-sized businesses will see faster and more reliable ROI by using existing AI tools to automate tedious, routine internal processes.
The true ROI of AI lies in reallocating the time and resources saved from automation towards accelerating growth and innovation. Instead of simply cutting staff, companies should use the efficiency gains to pursue new initiatives that increase demand for their products or services.
History shows that transformative innovations like airlines, vaccines, and PCs, while beneficial to society, often fail to create sustained, concentrated shareholder value as they become commoditized. This suggests the massive valuations in AI may be misplaced, with the technology's benefits accruing more to users than investors in the long run.
In a new, high-risk category, betting on infrastructure ('shovels') isn't necessarily safer. If the category fails, both the application and infrastructure layers lose. But if it succeeds, the application layer captures disproportionately more value, making infrastructure a lower-upside bet for the same level of existential risk.
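To make the asymmetry concrete, here is a minimal sketch with purely hypothetical numbers; the probability and payoff multiples are illustrative assumptions, not figures from the source:

```python
# Hypothetical payoffs for a $1 bet on a new, high-risk category.
# Assumption: the category succeeds with probability p; if it fails,
# both the application and infrastructure bets go to zero together.
p = 0.3                    # hypothetical probability the category succeeds
app_multiple = 20.0        # hypothetical payoff multiple for the application layer
infra_multiple = 5.0       # hypothetical payoff multiple for the infrastructure layer

app_ev = p * app_multiple      # expected value of the application bet
infra_ev = p * infra_multiple  # expected value of the infrastructure bet

print(f"App expected value:   {app_ev:.1f}x")    # 6.0x
print(f"Infra expected value: {infra_ev:.1f}x")  # 1.5x
# Both bets share the same (1 - p) chance of total loss, so for identical
# existential risk the infrastructure bet simply carries lower upside.
```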
In the current AI landscape, knowledge and assumptions become obsolete within months, not years. This rapid pace of evolution creates significant stress, as investors and founders must constantly re-educate themselves to make informed decisions. Relying on past knowledge is a quick path to failure.
Headlines about high AI pilot failure rates are misleading because it is incredibly easy to start a project, which inflates the denominator of attempts. Robust, successful AI implementations are happening, but they take 6-12 months of serious effort, not the quick wins promised by hype cycles.
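A toy calculation shows how a cheap-to-start denominator skews the headline number; the counts below are made up purely for illustration:

```python
# Hypothetical counts, purely illustrative.
pilots_started = 1000   # spinning up a GenAI pilot costs almost nothing
robust_successes = 50   # serious implementations built over 6-12 months

failure_rate = 1 - robust_successes / pilots_started
print(f"Headline failure rate: {failure_rate:.0%}")  # 95%
# Fifty genuine successes still exist; the scary percentage mostly reflects
# how easy it is to start a pilot, not how hard it is to succeed.
```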
While AI investment has exploded, US productivity has barely risen. Valuations are priced as if a societal transformation is complete, yet 95% of GenAI pilots fail to positively impact company P&Ls. This gap between market expectation and real-world economic benefit creates systemic risk.