A 'GenAI solves everything' mindset is flawed. High-latency models are unsuitable for real-time operational needs, such as optimizing a warehouse worker's scanning path, where responses are needed in milliseconds. The key is to apply the right tool—be it an optimizer, machine learning, or GenAI—to the specific business problem.
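As a minimal sketch of what "right tool for the problem" can mean in practice, the routing rule below dispatches on latency budget and problem shape; the names, thresholds, and categories are illustrative assumptions, not a prescribed design.

```python
from enum import Enum

class Tool(Enum):
    OPTIMIZER = "deterministic optimizer"   # e.g., path/route solvers for scanning paths
    ML_MODEL = "trained predictive model"   # e.g., demand or propensity forecasts
    GENAI = "large language model"          # e.g., drafting, summarization, Q&A

def pick_tool(latency_budget_ms: float, open_ended_language: bool) -> Tool:
    """Illustrative routing: millisecond-scale operational decisions stay on fast,
    deterministic tooling; GenAI is reserved for language-heavy work that can
    tolerate seconds of latency."""
    if latency_budget_ms < 100:       # real-time floor, e.g., a warehouse scan path
        return Tool.OPTIMIZER
    if not open_ended_language:       # structured prediction fits classic ML
        return Tool.ML_MODEL
    return Tool.GENAI

print(pick_tool(20, False))    # Tool.OPTIMIZER
print(pick_tool(5000, True))   # Tool.GENAI
```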

Related Insights

While AI can attempt complex, hour-long tasks with 50% success, its reliability plummets for longer operations. For mission-critical enterprise use requiring 99.9% success, current AI can only reliably complete tasks taking about three seconds. This necessitates breaking large problems into many small, reliable micro-tasks.
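The arithmetic behind this is simple compounding: when a long task is a chain of steps that must all succeed, small per-step error rates multiply into large end-to-end failure rates. A back-of-the-envelope sketch, assuming independent steps with equal success probability:

```python
def chain_success(p_step: float, n_steps: int) -> float:
    """End-to-end success of a chain of independent steps, each succeeding with p_step."""
    return p_step ** n_steps

# A step that is 99% reliable still fails a 60-step chain nearly half the time:
print(f"{chain_success(0.99, 60):.1%}")        # ~54.7%

# To keep a 60-step chain at 99.9%, each step must be roughly 99.998% reliable:
print(f"{0.999 ** (1 / 60):.5%}")              # required per-step reliability
```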

Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
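A hedged sketch of the general pattern (not AlphaSense's actual implementation): a routing table keyed by task and client segment, where each entry records the model and prompt style that evaluated best for that combination.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RouteKey:
    task: str       # e.g., "earnings_summary", "doc_qa"
    segment: str    # e.g., "buy_side", "corporate"

# Hypothetical results of offline evaluation: best model and prompt style per route.
ROUTES = {
    RouteKey("earnings_summary", "buy_side"):  {"model": "model-a", "style": "terse, numbers-first"},
    RouteKey("earnings_summary", "corporate"): {"model": "model-b", "style": "narrative, plain-language"},
}

DEFAULT = {"model": "default-model", "style": "neutral"}

def route(task: str, segment: str) -> dict:
    """Fall back to a default model when no evaluated route exists."""
    return ROUTES.get(RouteKey(task, segment), DEFAULT)

print(route("earnings_summary", "buy_side"))
```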

Many leaders mistakenly halt AI adoption while waiting for perfect data governance. This is a strategic error. Organizations should immediately identify and implement the hundreds of high-value generative AI use cases that require no access to proprietary data, creating immediate wins while larger data initiatives continue.

High productivity isn't about using AI for everything. It's a disciplined workflow: breaking a task into sub-problems, using an LLM for high-leverage parts like scaffolding and tests, and reserving human focus for the core implementation. This avoids the sunk cost of forcing AI on unsuitable tasks.

Users mistakenly evaluate AI tools based on the quality of the first output. However, since 90% of the work is iterative, the superior tool is the one that handles a high volume of refinement prompts most effectively, not the one with the best initial result.
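One way to make that evaluation concrete, assuming you supply your own tool call (`generate`) and quality rubric (`score`), is to score the whole refinement trajectory rather than only the first answer; the sketch below is illustrative, not a standard benchmark.

```python
def generate(prompt: str, history: list[str]) -> str:
    raise NotImplementedError("call the tool under evaluation here")

def score(output: str) -> float:
    raise NotImplementedError("apply your quality rubric, returning 0..1")

def refinement_trajectory(task: str, feedback_prompts: list[str]) -> list[float]:
    """Quality after the initial output and after each refinement prompt,
    so tools are compared on the full curve, not just the first point."""
    history: list[str] = []
    output = generate(task, history)
    scores = [score(output)]
    for feedback in feedback_prompts:
        history.append(output)
        output = generate(feedback, history)
        scores.append(score(output))
    return scores
```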

A critical error in AI integration is automating existing, often clunky, processes. Instead, companies should use AI as an opportunity to fundamentally rethink and redesign workflows from the ground up to achieve the desired outcome in a more efficient and customer-centric way.

Building a single, all-purpose AI is like hiring one person for every company role. To maximize accuracy and creativity, build multiple custom GPTs, each configured for a specific function like copywriting or operations, and have them collaborate.
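A minimal sketch of that division of labour, assuming a generic chat API behind a placeholder `call_llm`; the role prompts and the hand-off order are illustrative.

```python
# `call_llm` is a stand-in for whichever chat-completion API you use.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("wire this to your chat API of choice")

SPECIALISTS = {
    "copywriter": "You write persuasive, on-brand marketing copy.",
    "operations": "You turn plans into step-by-step operational checklists.",
    "reviewer":   "You critique drafts for accuracy, tone, and clarity.",
}

def collaborate(brief: str) -> str:
    """Each specialist handles its own function; outputs feed the next role."""
    draft = call_llm(SPECIALISTS["copywriter"], brief)
    checklist = call_llm(SPECIALISTS["operations"], f"Create a launch checklist for:\n{draft}")
    return call_llm(SPECIALISTS["reviewer"], f"Review this draft and checklist:\n{draft}\n\n{checklist}")
```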

Pega's CTO advises using the powerful reasoning of LLMs to design processes and marketing offers. However, at runtime, switch to faster, cheaper, and more consistent predictive models. This avoids the unpredictability, cost, and risk of calling expensive LLMs for every live customer interaction.
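A sketch of that design-time / runtime split (not Pega's actual implementation): candidate offers are drafted offline with LLM help and human review, while the live path scores each customer with a small, fast, deterministic model.

```python
# --- Design time (offline): candidates drafted with an LLM, reviewed by humans ---
CANDIDATE_OFFERS = [
    {"id": "cashback_2pct", "min_score": 0.6},
    {"id": "fee_waiver",    "min_score": 0.3},
    {"id": "no_offer",      "min_score": 0.0},
]

# --- Runtime (online): cheap, consistent scoring per live interaction ------------
def propensity_score(customer: dict) -> float:
    """Stand-in for a trained predictive model: fast, cheap, and deterministic."""
    return min(1.0, 0.1 * customer.get("recent_visits", 0))

def next_best_offer(customer: dict) -> str:
    score = propensity_score(customer)
    eligible = [o for o in CANDIDATE_OFFERS if score >= o["min_score"]]
    return max(eligible, key=lambda o: o["min_score"])["id"]

print(next_best_offer({"recent_visits": 7}))   # "cashback_2pct"
```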

To maximize AI's impact, don't just find isolated use cases for content or demand gen teams. Instead, map a core process like a campaign workflow and apply AI to augment each stage, from strategy and creation to localization and measurement. AI is workflow-native, not function-native.

Go beyond using AI for simple efficiency gains. Engage with advanced reasoning models as if they were expert business consultants. Ask them deep, strategic questions to fundamentally innovate and reimagine your business, not just incrementally optimize current operations.