IBM CEO Arvind Krishna argues Watson's core AI tech was sound, but its failure stemmed from a closed, all-in-one product approach. The market, especially developers, preferred modular building blocks to create their own applications, a lesson that informed the WatsonX rebranding with LLMs.

Related Insights

Customers are hesitant to trust a black-box AI with critical operations. The winning business model is to sell a complete outcome or service, using AI internally for a massive efficiency advantage while keeping humans in the loop for quality and trust.

In the fast-evolving AI space, Vercel's AI SDK deliberately remained low-level. CTO Malte Ubl explains that because "we know absolutely nothing" about future AI app patterns, providing a flexible, minimal toolkit was superior to competitors' rigid, high-level frameworks that made incorrect assumptions about user needs.

Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
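The idea of a 'harness' can be made concrete with a minimal sketch. Everything here is illustrative: `fake_model` stands in for any LLM API, and the `Harness` class, tool protocol, and prompt format are invented for the example, not taken from any real SDK.

```python
# Hypothetical sketch: the "harness" is everything wrapped around a raw model call --
# the system prompt, the tool registry, and the scaffolding that executes tool calls.

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; just enough behavior to prove the plumbing.
    if "TOOL:calculator" in prompt and "2+2" in prompt:
        return "CALL calculator 2+2"
    return "ANSWER " + prompt.splitlines()[-1]

def calculator(expr: str) -> str:
    # A single whitelisted tool the harness exposes to the model.
    a, b = expr.split("+")
    return str(int(a) + int(b))

class Harness:
    """System prompt + tools + scaffolding around a bare model."""
    def __init__(self, model, system_prompt, tools):
        self.model = model
        self.system_prompt = system_prompt
        self.tools = tools

    def run(self, user_input: str) -> str:
        # The harness, not the model, decides what context the model sees...
        tool_list = ", ".join(f"TOOL:{name}" for name in self.tools)
        prompt = f"{self.system_prompt}\nAvailable: {tool_list}\n{user_input}"
        reply = self.model(prompt)
        # ...and what happens with the reply (here: executing a tool call).
        if reply.startswith("CALL "):
            _, name, arg = reply.split(" ", 2)
            return self.tools[name](arg)
        return reply.removeprefix("ANSWER ")

harness = Harness(fake_model, "You are a careful assistant.", {"calculator": calculator})
print(harness.run("2+2"))  # → 4
```

The point of the sketch: swapping in a newer `fake_model` changes nothing about the user experience unless the surrounding harness is good, which is where the insight says the real value lives.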

Don't just sprinkle AI features onto your existing product ('AI at the edge'). Transformative companies rethink workflows and shrink their old codebase, making the LLM a core part of the solution. This is about re-architecting the solution from the ground up, not just enhancing it.

Enterprises struggle to get value from AI due to a lack of iterative data-science expertise. The winning model for AI companies isn't just selling APIs, but embedding "forward deployment" teams of engineers and scientists to co-create solutions, closing the gap between prototype and production value.

Most successful SaaS companies weren't built on new core tech, but by packaging existing tech (like databases or CRMs) into solutions for specific industries. AI is no different. The opportunity lies in unbundling a general tool like ChatGPT and rebundling its capabilities into vertical-specific products.

IBM CEO Arvind Krishna's strategy rests on the conviction that most enterprises will remain hybrid, avoiding lock-in to one public cloud. This creates a durable market for IBM's management software. The second pillar is focusing on deploying trusted AI in regulated industries, ceding the consumer space to others.

Initially, even OpenAI believed a single, ultimate 'model to rule them all' would emerge. This thinking has completely changed to favor a proliferation of specialized models, creating a healthier, less winner-take-all ecosystem where different models serve different needs.

IBM's CEO explains that previous deep learning models were "bespoke and fragile," requiring massive, costly human labeling for single tasks. LLMs are an industrial-scale unlock because they eliminate this labeling step, making them vastly faster and cheaper to tune and deploy across many tasks.
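The contrast can be sketched in a few lines. This is a toy illustration, not IBM's stack: `fake_llm` is a stub standing in for any instruction-following model, and the task names and prompt templates are invented for the example.

```python
# Hypothetical contrast: the old per-task pipeline needed a labeled dataset and a
# trained model per task; a prompted LLM reuses one model across many tasks, with
# the task specified in plain text instead of labels.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM; pretends to follow the instruction in the prompt.
    if prompt.startswith("Classify sentiment:"):
        return "positive" if "great" in prompt else "negative"
    if prompt.startswith("Extract the date:"):
        return prompt.split("on ")[-1].rstrip(".")
    return ""

# Old world (per the insight): train_sentiment_model(labeled_data),
# train_date_extractor(labeled_data), ... one bespoke, fragile model each.
# New world: one model, many tasks, no labeling step.
tasks = {
    "sentiment": "Classify sentiment: {text}",
    "date":      "Extract the date: {text}",
}

def run_task(task: str, text: str) -> str:
    return fake_llm(tasks[task].format(text=text))

print(run_task("sentiment", "The demo was great"))  # → positive
print(run_task("date", "Shipped on 2024-05-01."))   # → 2024-05-01
```

Adding a third task here is one more template string; in the bespoke-model world it was another labeling and training project, which is the cost collapse the insight describes.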

Instead of building a single-purpose application (first-order thinking), successful AI product strategy involves creating platforms that enable users to build their own solutions (second-order thinking). This approach targets a much larger opportunity by empowering users to create custom workflows.