We scan new podcasts and send you the top 5 insights daily.
The most effective integrations use external ML models as specialized scoring components within Pega's broader decisioning framework. The model's score should influence outcomes like prioritization and eligibility, but it should operate alongside, not in place of, existing business rules, eligibility criteria, and contact policies.
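A minimal sketch of that pattern: the model score influences priority, but hand-written eligibility rules and contact policies can still veto the action. All field names and thresholds here are illustrative assumptions, not Pega APIs.

```python
# Hypothetical decisioning step: business rules gate the outcome,
# the ML score only ranks among eligible customers.
def decide_action(customer, propensity_score):
    """Return (eligible, priority) for a hypothetical offer."""
    # Eligibility rules run first and can veto regardless of the score.
    if customer["age"] < 18 or customer["opted_out"]:
        return (False, 0.0)
    # Contact policy: cap contacts in the last 30 days.
    if customer["contacts_last_30d"] >= 2:
        return (False, 0.0)
    # The external model's score influences prioritization, not eligibility.
    priority = propensity_score * customer.get("value_weight", 1.0)
    return (True, priority)
```

Note the asymmetry: a high score never overrides a rule, but among eligible customers the score drives ordering.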
Technical metrics like "accuracy" are often the wrong measure for ML projects and can set misleading expectations. Projects should instead be evaluated against business KPIs like profit, savings, or ROI. This aligns data science with business goals and reveals the true value of even imperfect predictions.
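A toy illustration of why accuracy and profit can disagree, using made-up confusion-matrix counts and campaign economics (revenue and cost figures are assumptions):

```python
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def campaign_profit(tp, fp, revenue_per_conversion=100.0, cost_per_contact=2.0):
    # Every contacted customer (tp + fp) costs money; only true positives convert.
    return tp * revenue_per_conversion - (tp + fp) * cost_per_contact

# Model A: 95% accurate but conservative -> profit 4880
# Model B: 89% accurate but catches more buyers -> profit 7640
profit_a = campaign_profit(tp=50, fp=10)    # acc = accuracy(50, 10, 900, 40)
profit_b = campaign_profit(tp=80, fp=100)   # acc = accuracy(80, 100, 810, 10)
```

The "worse" model by accuracy wins by thousands in profit, which is exactly why the business KPI should be the yardstick.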
The metadata file in Pega's Prediction Studio does more than describe a model. It defines the runtime contract, linking model inputs to Pega properties, dictating performance metrics (AUC, F-score), and ensuring correct response tracking. This file is critical for runtime correctness and monitoring, not just for setup.
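To make the "runtime contract" idea concrete, here is a hypothetical sketch of validating such a mapping at deploy time. The field names are illustrative only, not Pega's actual metadata schema:

```python
# Illustrative metadata: model inputs mapped to properties, plus the
# monitoring metrics declared up front. Not Pega's real file format.
METADATA = {
    "inputs": {"Age": "Customer.Age", "Tenure": "Customer.TenureMonths"},
    "output": "Propensity",
    "metrics": ["AUC", "F-score"],
}

def validate_contract(metadata, known_properties):
    """Fail fast if the metadata references a property that doesn't exist,
    rather than discovering the mismatch at runtime."""
    missing = [p for p in metadata["inputs"].values() if p not in known_properties]
    if missing:
        raise ValueError(f"Unmapped properties: {missing}")
    return True
```

Treating the file as a contract means this kind of check belongs in deployment, not in a one-time setup step.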
Don't just set and forget your lead scoring AI. Create a separate, time-based agent that analyzes recent closed-won deals. This "meta-agent" can then identify new success patterns and suggest updates to the primary scoring agent's prompt, ensuring your qualification model evolves with live data.
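One way the meta-agent's analysis step could look, reduced to its core: mine recent wins for recurring traits and emit suggested prompt additions. The trait encoding and wording are assumptions for illustration:

```python
from collections import Counter

def suggest_prompt_updates(closed_won_deals, top_n=2):
    """Hypothetical 'meta-agent' step: count recurring traits across recent
    closed-won deals and turn the most common ones into prompt suggestions."""
    traits = Counter()
    for deal in closed_won_deals:
        traits.update(deal["traits"])  # e.g. ["industry:fintech", "size:mid"]
    common = [t for t, _ in traits.most_common(top_n)]
    return [f"Weight leads matching '{t}' higher; it recurs in recent wins."
            for t in common]
```

In practice an LLM would do richer pattern extraction, but the loop is the same: wins in, prompt-update candidates out, with a human approving the change.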
The term "AI-native" is misleading. A successful platform's foundation is a robust sales workflow and complex data integration, which constitute about 70% of the system. The AI or Large Language Model component is a critical, but smaller, 30% layer on top of that operational core.
Avoid vague, company-wide AI mandates. Instead, apply a maturity framework to individual processes (e.g., account research). This approach builds a practical roadmap, moving specific use cases up the maturity ladder as needed and preventing costly over-engineering.
Don't wait for AI to be perfect. The correct strategy is to apply current AI models—which are roughly 60-80% accurate—to business processes where that level of performance is sufficient for a human to then review and bring to 100%. Chasing perfection in-house is a waste of resources given the pace of model improvement.
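The human-in-the-loop routing this implies can be sketched in a few lines. The 0.8 confidence threshold is an assumption to tune per process:

```python
def route(prediction, confidence, threshold=0.8):
    """Auto-apply high-confidence predictions; queue the rest for a human,
    who brings the overall output to 100%."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)
```

The business question then becomes whether the human-review queue is small enough for the savings on the auto-handled share to pay off.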
Pega's CTO advises using the powerful reasoning of LLMs to design processes and marketing offers. However, at runtime, switch to faster, cheaper, and more consistent predictive models. This avoids the unpredictability, cost, and risk of calling expensive LLMs for every live customer interaction.
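A sketch of what the runtime side looks like under this split: coefficients fit offline (where LLM reasoning can help design features and offers), and only a cheap, deterministic scorer runs per interaction. The weights and features here are invented for illustration:

```python
import math

# Coefficients assumed to be fit offline; at runtime there is no LLM call,
# just millisecond-scale, fully repeatable arithmetic.
WEIGHTS = {"bias": -1.2, "tenure_years": 0.3, "recent_visits": 0.5}

def score(customer):
    """Logistic scorer standing in for a runtime predictive model."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * customer.get(k, 0.0)
                              for k in WEIGHTS if k != "bias")
    return 1.0 / (1.0 + math.exp(-z))
```

Same input, same output, every time, which is exactly the consistency argument against calling an LLM in the live path.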
The choice of cloud provider for hosting external models (e.g., AWS SageMaker vs. Google Vertex AI) has direct consequences for which ML frameworks are supported. For example, Pega's Vertex AI integration supports XGBoost but not TensorFlow or PyTorch, unlike its broader SageMaker support. This is a critical upfront technical consideration.
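That upfront check is simple to encode. The matrix below mirrors the claim above; verify it against current Pega documentation before committing to a provider:

```python
# Framework support per provider integration, as described above.
SUPPORTED = {
    "vertex_ai": {"xgboost"},
    "sagemaker": {"xgboost", "tensorflow", "pytorch"},
}

def can_host(provider, framework):
    """Check framework support before choosing a hosting provider."""
    return framework in SUPPORTED.get(provider, set())
```

Running this check before model development starts avoids discovering the constraint after a PyTorch model is already trained.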
The rapid release of new AI models makes it crucial for companies to move beyond industry benchmarks. Developing internal evaluation systems ("evals") is necessary to test which model performs best on unique, high-value business use cases, because model choice increasingly determines outcomes.
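The core of such an eval harness is small: pair business-specific inputs with checkers for acceptable output, then score each candidate model on the same cases. Cases and checkers below are hypothetical placeholders:

```python
def run_eval(model_fn, cases):
    """Score a candidate model on internal cases: each case pairs an input
    with a predicate that decides whether the output is acceptable."""
    passed = sum(1 for inp, check in cases if check(model_fn(inp)))
    return passed / len(cases)

# Hypothetical business-specific cases (replace with real ones).
cases = [
    ("summarize contract X", lambda out: "contract" in out),
    ("extract renewal date", lambda out: "2025" in out),
]
```

Run `run_eval` against each new model release with the same cases, and the "which model?" debate turns into a pass-rate comparison on your own workload instead of a leaderboard argument.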
Many companies focus on AI models first, only to hit a wall. An "integration-first" approach is a strategic imperative. Connecting disparate systems *before* building agents ensures they have the necessary data to be effective, avoiding the "garbage in, garbage out" trap at a foundational level.