Blue Jay's initial supervised learning models for specific tax questions led to inconsistent usage and churn. Users left when their niche problems weren't covered. Pivoting to a generative AI approach (RAG with LLMs) allowed them to answer *any* question, finally achieving strong product-market fit and solving their core retention issue.
Cues' initial product was a specialized AI design agent. However, they observed that users were more frequently uploading files to use it as a knowledge base. Recognizing this emergent behavior, they pivoted to a more horizontal product, which was key to their rapid growth and product-market fit.
Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
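The multi-model approach can be sketched as a simple routing table keyed on task and client segment. The task labels, segments, and model names below are hypothetical placeholders, not AlphaSense's actual registry:

```python
# Minimal sketch of per-task, per-segment model routing.
# All task/segment/model names are illustrative assumptions.

ROUTING_TABLE = {
    ("summarize", "buy_side"): "model-a-large",    # stylistic preference of finance clients
    ("summarize", "corporate"): "model-b-fast",
    ("extract", "buy_side"): "model-c-precise",
}
DEFAULT_MODEL = "model-b-fast"

def route(task: str, segment: str) -> str:
    """Pick the model deployed for a task/segment pair, with a fallback default."""
    return ROUTING_TABLE.get((task, segment), DEFAULT_MODEL)
```

In practice such a table would be backed by offline evaluations per task, so swapping a model for one use case never disturbs the others.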
The turning point came when a simple OpenAI API call solved a customer's problem more effectively than their complex, slow data science script. This stark contrast revealed the massive opportunity in leveraging modern AI and triggered their pivot.
For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.
According to IBM's AI Platform VP, Retrieval-Augmented Generation (RAG) was the killer app for enterprises in the first year after ChatGPT's release. RAG lets companies connect LLMs to their proprietary structured and unstructured data, unlocking immense value from knowledge bases they already own.
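The core RAG loop is: retrieve the documents most relevant to a query, then ground the LLM's answer in them. A minimal sketch, using naive keyword-overlap retrieval in place of the embedding search and actual LLM call a production system would use:

```python
# Toy RAG sketch: retrieval by keyword overlap, then prompt assembly.
# Real systems use vector embeddings and a live model call; both are
# simplified here for illustration.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The value comes from the grounding step: the model answers from the company's own data rather than from its training distribution.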
Most successful SaaS companies weren't built on new core tech, but by packaging existing tech (like databases or CRMs) into solutions for specific industries. AI is no different. The opportunity lies in unbundling a general tool like ChatGPT and rebundling its capabilities into vertical-specific products.
An LLM analyzes sales call transcripts to generate a 1-10 sentiment score. Benchmarked against historical data, this score proved a highly predictive leading indicator of both customer churn and upsell potential, replacing subjective rep feedback with a consistent, data-driven early warning system.
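The glue code for this pattern is small: parse a numeric score out of the model's free-text reply, then compare it to the account's historical baseline. The reply format and the 2-point alert margin below are assumptions for illustration:

```python
import re

# Sketch of turning an LLM's sentiment reply into a churn signal.
# The actual model call is omitted; only the parsing and benchmarking
# logic is shown, with an assumed alert margin.

def parse_score(reply: str) -> int:
    """Extract a 1-10 integer score from an LLM reply, clamping out-of-range values."""
    m = re.search(r"\d+", reply)
    if not m:
        raise ValueError(f"no score found in reply: {reply!r}")
    return max(1, min(10, int(m.group())))

def churn_alert(score: int, baseline: float, margin: float = 2.0) -> bool:
    """Flag calls whose sentiment falls well below the account's historical average."""
    return score < baseline - margin
```

Clamping and a strict parse matter here: an LLM asked for "a number from 1 to 10" will occasionally return prose or an out-of-range value, and a scoring pipeline has to degrade predictably when it does.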
Pega's CTO advises using the powerful reasoning of LLMs to design processes and marketing offers at design time, but switching at runtime to faster, cheaper, and more consistent predictive models. This avoids the unpredictability, cost, and risk of calling expensive LLMs for every live customer interaction.
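The separation can be sketched as follows: the artifacts an offline LLM step might produce (offer thresholds, scoring weights) are frozen into plain data, and the hot path evaluates only a cheap deterministic function. All weights, thresholds, and offer names here are hypothetical:

```python
# Sketch of the design-time / runtime split: offer rules and weights are
# assumed to have been produced offline (e.g., by an LLM-assisted design
# step); the live path below makes no model call at all.

OFFER_RULES = [          # (min_score, offer), ordered high to low; illustrative values
    (0.8, "premium_upgrade"),
    (0.5, "loyalty_discount"),
]

def runtime_score(recency_days: int, usage: float) -> float:
    """Fast, consistent propensity score for the hot path (inputs in [0, 1] / days)."""
    return max(0.0, min(1.0, 0.6 * usage + 0.4 * (1 - recency_days / 365)))

def pick_offer(score: float) -> str:
    """Map a propensity score to an offer via the offline-designed rules."""
    for threshold, offer in OFFER_RULES:
        if score >= threshold:
            return offer
    return "no_offer"
```

The runtime path is auditable and millisecond-fast, which is exactly the consistency argument being made: the LLM's judgment is captured once, offline, rather than re-exercised per interaction.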
Initially building a tool for ML teams, they discovered the true pain point was creating AI-powered workflows for business users. This insight came from observing how first customers struggled with the infrastructure *around* their tool, not the tool itself.
Companies with messy data should focus on generative AI tasks like content creation for immediate value. Predictive AI projects, such as churn forecasting, require extensive data cleaning and expertise, making them slow and complex. Generative tools offer quick efficiency gains with minimal setup, providing a faster path to ROI.