
Before jumping to GenAI, assess your problem. If you can frame it like a spreadsheet, with clear input columns and a predictable output (a number or a category), then a simpler, cheaper, and more reliable traditional machine learning model is likely the best choice.
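The "spreadsheet test" can be made concrete with a toy example. This is a minimal sketch using a hand-rolled k-nearest-neighbours classifier on invented churn data; the column names and values are illustrative assumptions, and in practice you would reach for a proper ML library.

```python
from collections import Counter
import math

def knn_predict(rows, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest labelled rows."""
    nearest = sorted(range(len(rows)), key=lambda i: math.dist(rows[i], query))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# A "spreadsheet-shaped" problem: input columns in, a category out.
# Columns: [monthly_spend, support_tickets]; label: did the customer churn?
rows   = [[20, 5], [25, 4], [80, 0], [90, 1], [85, 1], [22, 6]]
labels = ["churn", "churn", "stay", "stay", "stay", "churn"]

print(knn_predict(rows, labels, [24, 5]))  # -> "churn"
```

No LLM is involved, and the prediction is fast, cheap, and easy to audit, which is the point of the insight above.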

Related Insights

Many AI developers get distracted by the 'LLM hype,' constantly chasing the best-performing model. The real focus should be on solving a specific customer problem. The LLM is a component, not the product, and deterministic code or simpler tools are often better for certain tasks.

Don't use your most powerful and expensive AI model for every task. A crucial skill is model triage: using cheaper models for simple, routine tasks like monitoring and scheduling, while saving premium models for complex reasoning, judgment, and creative work.
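Model triage often reduces to a small routing layer in front of your LLM calls. This is a minimal sketch; the task categories and model-tier names are hypothetical placeholders, not real model identifiers.

```python
# Routine, high-volume work goes to the cheap tier; complex work to the
# premium tier. The categories and tier names below are assumptions.
CHEAP_TASKS = {"monitoring", "scheduling", "classification", "extraction"}
PREMIUM_TASKS = {"reasoning", "judgment", "creative"}

def pick_model(task_type: str) -> str:
    """Route a task to a model tier based on its category."""
    if task_type in CHEAP_TASKS:
        return "small-cheap-model"     # hypothetical tier name
    if task_type in PREMIUM_TASKS:
        return "large-premium-model"   # hypothetical tier name
    raise ValueError(f"unknown task type: {task_type!r}")

print(pick_model("monitoring"))  # -> small-cheap-model
print(pick_model("reasoning"))   # -> large-premium-model
```

Even this crude dispatch can cut costs substantially when most traffic is routine, since only the rare complex task pays premium-model prices.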

AI models are not an immediate threat to Excel because they are designed for approximation, not the precise computation required for financial and data analysis. Their 'black box' nature also contrasts with a spreadsheet's core value proposition: transparent, verifiable calculations that users can trust.

High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids the sunk cost of forcing AI on unsuitable tasks.

For most enterprise tasks, massive frontier models are overkill—a "bazooka to kill a fly." Smaller, domain-specific models are often more accurate for targeted use cases, significantly cheaper to run, and more secure. They focus on being the "best-in-class employee" for a specific task, not a generalist.

A 'GenAI solves everything' mindset is flawed. High-latency models are unsuitable for real-time operational needs, like optimizing a warehouse worker's scanning path, which requires millisecond responses. The key is to apply the right tool—be it an optimizer, machine learning, or GenAI—to the specific business problem.

To avoid over-engineering, validate an AI chatbot using a simple spreadsheet as its knowledge base. This MVP approach quickly tests user adoption and commercial value. The subsequent pain of manually updating the sheet is the best justification for investing engineering resources into a proper data pipeline.
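The spreadsheet-as-knowledge-base MVP can be sketched in a few lines. Here the "sheet" is an inline CSV with invented rows, and retrieval is naive keyword matching; a real chatbot would layer an LLM on top, but this is enough to test whether users get value at all.

```python
import csv
import io

# Hypothetical two-column sheet: a question pattern and its canned answer.
SHEET = """question,answer
refund policy,Refunds are accepted within 30 days.
shipping time,Orders ship within 2 business days.
"""

def load_kb(csv_text):
    """Parse the CSV 'spreadsheet' into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def answer(kb, user_question):
    """Return the first row whose keywords all appear in the question."""
    q = user_question.lower()
    for row in kb:
        if all(word in q for word in row["question"].split()):
            return row["answer"]
    return "Sorry, I don't know that yet."

kb = load_kb(SHEET)
print(answer(kb, "What is your refund policy?"))  # -> Refunds are accepted within 30 days.
```

When keeping the sheet current becomes painful, that pain is your evidence for funding a real pipeline, exactly as the insight suggests.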

While GenAI grabs headlines, its most practical enterprise use is as an intelligent orchestrator. It can call upon and synthesize results from highly effective traditional tools like time-series forecasting models or SQL databases, multiplying their value within a larger, more powerful system.
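The orchestrator pattern can be sketched as a dispatch table of traditional tools. In this toy version the LLM is replaced by a hardcoded plan, since the point is the dispatch, not the model; the tool functions and their return values are invented stand-ins.

```python
def forecast_sales(region):     # stand-in for a time-series model
    return {"us": 120, "eu": 95}[region]

def query_inventory(region):    # stand-in for a SQL database lookup
    return {"us": 40, "eu": 60}[region]

# The orchestrator only knows tool names; the tools do the real work.
TOOLS = {"forecast": forecast_sales, "inventory": query_inventory}

def orchestrate(plan):
    """Run each (tool, arg) step and synthesize the results into one dict."""
    return {name: TOOLS[name](arg) for name, arg in plan}

# A plan an LLM might emit: check demand and stock for the US.
print(orchestrate([("forecast", "us"), ("inventory", "us")]))
```

The value multiplication comes from composition: each traditional tool stays precise and cheap, while the LLM (here, the plan) only decides what to call and how to summarize.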

Pega's CTO advises using the powerful reasoning of LLMs to design processes and marketing offers. However, at runtime, switch to faster, cheaper, and more consistent predictive models. This avoids the unpredictability, cost, and risk of calling expensive LLMs for every live customer interaction.

Resist the urge to apply LLMs to every problem. A better approach is using a 'first principles' decision tree. Evaluate if the task can be solved more simply with data visualization or traditional machine learning before defaulting to a complex, probabilistic, and often overkill GenAI solution.
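Such a decision tree can literally be written as code. This is a minimal sketch with illustrative yes/no questions of my own choosing; the real tree for your organization would ask different questions, but the shape, trying the simplest adequate tool before GenAI, is the same.

```python
def choose_tool(task):
    """Walk a simple 'first principles' decision tree before defaulting
    to GenAI. `task` maps yes/no questions (illustrative) to booleans."""
    if task.get("answerable_by_chart"):           # would a plot answer it?
        return "data visualization"
    if task.get("tabular_with_clear_target"):     # spreadsheet-shaped?
        return "traditional ML"
    if task.get("deterministic_rules_suffice"):   # plain code works?
        return "deterministic code"
    return "GenAI"                                # the last resort, not the default

print(choose_tool({"tabular_with_clear_target": True}))  # -> traditional ML
print(choose_tool({}))                                   # -> GenAI
```

Note that GenAI is the fall-through case: you only reach it after the cheaper, more deterministic options have been ruled out.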