For use cases demanding strict fidelity to a complex knowledge domain like Catholic theology, fine-tuning existing models proves inadequate over the long tail of user queries. This necessitates the more expensive path of training a model from scratch.
Early AI training data work involved simple preference tasks, such as ranking which of two responses is better. Now, training frontier models requires PhDs and top professionals to perform complex, hours-long tasks like building entire websites or explaining nuanced cancer topics. The demand is for deep, specialized expertise, not just generalist labor.
For specialized, high-stakes tasks like insurance underwriting, enterprises will favor smaller, on-prem models fine-tuned on proprietary data. These models can be faster, more accurate, and more secure than general-purpose frontier models, creating a lasting market for custom AI solutions.
The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.
OpenAI favors "zero gradient" prompt optimization because serving thousands of unique, fine-tuned model snapshots is operationally difficult. Prompt-based adjustments deliver performance gains without that infrastructure burden, making the approach more practical and scalable for both OpenAI and developers.
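To make the contrast concrete, here is a minimal sketch of gradient-free prompt optimization: candidate prompts are scored against a small labeled eval set and the best one wins, with no weight updates and no extra model snapshots to serve. The `call_model` helper and the eval cases are hypothetical placeholders, not OpenAI's actual tooling.

```python
# Minimal sketch of "zero gradient" prompt optimization: search over candidate
# prompts and keep whichever scores best on a small eval set, instead of
# fine-tuning and serving a separate model snapshot. `call_model` is a
# hypothetical stand-in for a real chat-completion call.
from typing import Callable

def call_model(system_prompt: str, user_input: str) -> str:
    # Placeholder so the sketch runs; swap in your actual API client here.
    return f"({system_prompt}) response to: {user_input}"

EVAL_SET = [
    {"input": "Is water damage from a burst pipe covered?", "expected_keyword": "clause"},
    {"input": "List the policy exclusions for flooding.", "expected_keyword": "exclusion"},
]

CANDIDATE_PROMPTS = [
    "Answer briefly.",
    "You are a careful underwriting analyst. Cite the relevant clause or exclusion.",
]

def score(prompt: str, model: Callable[[str, str], str]) -> float:
    """Fraction of eval cases whose output mentions the expected keyword."""
    hits = sum(
        case["expected_keyword"] in model(prompt, case["input"]).lower()
        for case in EVAL_SET
    )
    return hits / len(EVAL_SET)

best = max(CANDIDATE_PROMPTS, key=lambda p: score(p, call_model))
print("Best candidate prompt:", best)
```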
Basic supervised fine-tuning (SFT) only adjusts a model's style. The real unlock for enterprises is reinforcement fine-tuning (RFT), which leverages proprietary datasets to create state-of-the-art models for specific, high-value tasks, moving beyond mere 'tone improvements.'
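As a rough illustration (not any vendor's actual fine-tuning API), the difference shows up in the shape of the training data: SFT pairs a prompt with one fixed target completion, while RFT pairs a prompt with a grader that scores whatever the model samples, letting proprietary evaluation logic drive the reward signal.

```python
# Conceptual sketch of SFT vs. RFT training examples; the schema and grader
# below are illustrative assumptions, not a specific platform's format.

# SFT: the model is nudged toward reproducing this exact completion (style/format).
sft_example = {
    "prompt": "Classify this claim: water damage from a burst pipe",
    "completion": "covered_peril",
}

# RFT: no fixed completion; a proprietary grader scores each sampled rollout,
# and that score is the reward the reinforcement step optimizes.
def grade(sampled_output: str) -> float:
    """Hypothetical grader: full credit for the right label, partial credit
    for a defensible near-miss, zero otherwise."""
    answer = sampled_output.strip().lower()
    if answer == "covered_peril":
        return 1.0
    if "covered" in answer:
        return 0.5
    return 0.0

rft_example = {"prompt": sft_example["prompt"], "grader": grade}

for rollout in ["covered_peril", "likely covered, pipe burst", "excluded"]:
    print(rollout, "->", rft_example["grader"](rollout))
```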
Successful vertical AI applications serve as a critical intermediary between powerful foundation models and specific industries like healthcare or legal. Their core value lies in being a "translation and transformation layer," adapting generic AI capabilities to solve nuanced, industry-specific problems for large enterprises.
Customizing a base model with proprietary data is only effective if a company possesses a massive corpus: at least 10 billion high-quality tokens *after* aggressive deduplication and filtering. That threshold makes the strategy viable only for the largest corporations, a far higher bar than most businesses realize.
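A back-of-the-envelope check shows why the bar is so high. All figures in the sketch below are invented assumptions (document counts, tokens per document, retention rate), meant to be replaced with measurements from your own corpus.

```python
# Rough feasibility check against the ~10B-token bar. All figures here are
# illustrative assumptions, not numbers from the source.
TOKEN_THRESHOLD = 10_000_000_000      # ~10B high-quality tokens after cleaning

raw_documents = 25_000_000            # documents in the raw proprietary corpus
avg_tokens_per_doc = 800              # measured on a sample with your tokenizer
retention_after_cleaning = 0.4        # fraction surviving dedup + quality filters

usable_tokens = raw_documents * avg_tokens_per_doc * retention_after_cleaning
print(f"Estimated usable tokens: {usable_tokens:,.0f}")
# 25M docs x 800 tokens x 0.4 ≈ 8B tokens, short of the 10B bar, which is
# why only very large corpora clear this threshold.
print("clears the bar" if usable_tokens >= TOKEN_THRESHOLD else "falls short")
```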
Early versions of Catholic AI struggled to apply core doctrines to users' personal problems. The team realized that papal homilies are distillations of complex theology for everyday life, providing a perfect dataset for teaching the model how to generalize from first principles.
Fine-tuning an AI model is most effective when you use high-signal data. The best source for this is the set of difficult examples where your system consistently fails. The processes of error analysis and evaluation naturally curate this valuable dataset, making fine-tuning a logical and powerful next step after prompt engineering.
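Here is a minimal sketch of that curation step, assuming a hypothetical eval-log schema with input, reference, and pass/fail fields: the failures your evals already flag become the fine-tuning set almost for free.

```python
# Turn eval failures into fine-tuning data. The log schema below is a
# hypothetical example; the point is that error analysis already isolates
# the hard cases, so exporting them is mostly a filtering step.
import json

eval_log = [
    {"input": "Does the policy cover flood damage?", "reference": "No; excluded under section 7.", "passed": False},
    {"input": "Summarize the renewal terms.", "reference": "...", "passed": True},
    {"input": "Is a burst pipe a covered peril?", "reference": "Yes, subject to the water-damage clause.", "passed": False},
]

# Keep only the cases the current system gets wrong; these carry the most signal.
failures = [case for case in eval_log if not case["passed"]]

# Export as prompt/completion pairs in JSONL, a common shape for SFT data.
with open("finetune_failures.jsonl", "w") as f:
    for case in failures:
        f.write(json.dumps({"prompt": case["input"], "completion": case["reference"]}) + "\n")

print(f"Exported {len(failures)} failure cases for fine-tuning.")
```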
The CEO contrasts general-purpose AI with their "courtroom-grade" solution, built on a proprietary, authoritative dataset of 160 billion documents. This ensures outputs are grounded in actual case law and verifiable, addressing the core weaknesses of consumer models for professional use.
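The pattern being described is retrieval-grounded generation with checkable citations. The sketch below is generic, not the company's actual pipeline; the corpus, retriever, and citation format are invented for illustration.

```python
# Generic sketch of grounding answers in an authoritative corpus and making
# citations verifiable. Not the vendor's pipeline; corpus and IDs are made up.
import re

CASE_LAW_INDEX = {
    "doc-001": "Smith v. Jones (1998): the duty of care extends to ...",
    "doc-002": "In re Example Corp. (2011): the exclusion applies when ...",
}

def retrieve(query: str, k: int = 2) -> dict[str, str]:
    """Toy retriever; a real system would query a search index over the corpus."""
    return dict(list(CASE_LAW_INDEX.items())[:k])

def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    source_block = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources.items())
    return f"Answer using ONLY the sources below and cite them by ID.\n{source_block}\nQuestion: {question}"

def citations_verifiable(answer: str, sources: dict[str, str]) -> bool:
    """Reject answers that cite nothing or cite documents never retrieved."""
    cited = re.findall(r"\[(doc-\d+)\]", answer)
    return bool(cited) and all(doc_id in sources for doc_id in cited)

sources = retrieve("Does the duty of care apply here?")
print(citations_verifiable("Yes, per [doc-001].", sources))   # True: grounded and checkable
print(citations_verifiable("Yes, per [doc-999].", sources))   # False: cites an unretrieved source
```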