The metadata file in Pega's Prediction Studio does more than describe a model. It defines the runtime contract: mapping model inputs to Pega properties, declaring the performance metrics to monitor (AUC, F-score), and ensuring responses are tracked against the right predictions. This file is critical for runtime correctness and monitoring, not just for initial setup.
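As a rough illustration of what such a contract pins down, here is a minimal sketch that emits a metadata file. The field names (`inputs`, `pegaProperty`, `responseTracking`, and so on) are hypothetical, not Pega's actual schema; the point is the three bindings any version of this file must make: input mapping, output interpretation, and outcome attribution.

```python
import json

# Illustrative only: field names are hypothetical, not Pega's real schema.
metadata = {
    "modelName": "churn_xgb_v3",
    # Runtime contract, part 1: map each model input to the Pega property
    # that supplies its value at scoring time.
    "inputs": [
        {"modelField": "tenure_months", "pegaProperty": ".Customer.TenureMonths", "type": "integer"},
        {"modelField": "avg_monthly_spend", "pegaProperty": ".Account.AvgMonthlySpend", "type": "double"},
    ],
    # Part 2: how the model's output is typed and interpreted downstream.
    "output": {"modelField": "churn_propensity", "type": "double", "range": [0.0, 1.0]},
    # Part 3: which metrics are monitored and how real-world outcomes are
    # matched back to predictions for those metrics to be meaningful.
    "metrics": ["AUC", "F-score"],
    "responseTracking": {"positiveOutcome": "Churned", "negativeOutcome": "Retained"},
}

with open("model_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

If any of these bindings is wrong, the model may still return scores, but they will be computed from the wrong inputs or monitored against the wrong outcomes, which is why the file matters beyond setup.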
The most effective integrations use external ML models as specialized scoring components within Pega's broader decisioning framework. The model's score should inform decisions such as prioritization, but it should operate alongside, not in place of, existing business rules, eligibility criteria, and contact policies.
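The pattern is easy to see in a generic sketch (this is not Pega's arbitration logic; the `Offer` type, `prioritize` function, and the score-times-margin ranking are all invented for illustration): eligibility rules gate the candidates first, and the model score only ranks what survives.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    margin: float   # business value weight set by the business, not the model
    eligible: bool  # result of eligibility rules and contact policies

def prioritize(offers: list[Offer], propensity: dict[str, float]) -> list[Offer]:
    """Rules filter first; the external model's score ranks the survivors."""
    candidates = [o for o in offers if o.eligible]
    return sorted(candidates,
                  key=lambda o: propensity.get(o.name, 0.0) * o.margin,
                  reverse=True)

offers = [Offer("PremiumUpgrade", margin=40.0, eligible=True),
          Offer("RetentionDiscount", margin=15.0, eligible=True),
          Offer("NewCreditLine", margin=60.0, eligible=False)]  # blocked by policy
scores = {"PremiumUpgrade": 0.12, "RetentionDiscount": 0.55}

print([o.name for o in prioritize(offers, scores)])
# ['RetentionDiscount', 'PremiumUpgrade'] -- NewCreditLine never surfaces,
# no matter how high its score might have been.
```

The key design choice is the order of operations: a high propensity can never override a contact policy, because the policy runs before the score is ever consulted.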
Instead of simply swapping a model behind a stable URL, Pega's platform enables a formal release process. Using Prediction Studio's champion/challenger slots and percentage-based rollouts, teams can safely deploy, monitor, and manage new model versions. This MLOps capability turns model updates into a governed, transparent activity.
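A percentage-based rollout typically relies on a deterministic split so that the same customer always hits the same model version, keeping response attribution clean. The sketch below shows the generic pattern, not Prediction Studio internals; the function name and the hash-bucket scheme are assumptions for illustration.

```python
import hashlib

def assign_variant(customer_id: str, challenger_pct: int = 10) -> str:
    """Deterministically route a customer to champion or challenger.

    Hashing the customer ID into 100 buckets gives a stable, roughly
    uniform split; raising challenger_pct widens the rollout without
    reshuffling customers already assigned to the challenger.
    """
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < challenger_pct else "champion"

# Route scoring traffic; monitoring then compares AUC / F-score per
# variant before the challenger is promoted to champion.
for cid in ["C-1001", "C-1002", "C-1003"]:
    print(cid, "->", assign_variant(cid))
```

Because the split is governed by a single parameter, promoting a challenger becomes a reviewable configuration change rather than an ad hoc redeployment, which is the transparency the champion/challenger process is meant to provide.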
The choice of cloud provider for hosting external models (e.g., AWS SageMaker vs. Google Vertex AI) has direct consequences for which ML frameworks are supported. For example, Pega's Vertex AI integration supports XGBoost but not TensorFlow or PyTorch, unlike its broader SageMaker support. This is a critical upfront technical consideration.
