Modernizing clinical trials is less about new tools and more about adopting a risk-proportional mindset, as outlined in the ICH E6(R3) guideline. This means focusing rigorous oversight on critical data and processes while applying lighter, more automated checks elsewhere, breaking the industry's habit of treating all data with the same level of manual scrutiny.
Clinical trial protocols become overly complex because teams copy and paste from previous studies, accumulating unnecessary data points and criteria. Merck advocates for "protocol lean design," which starts from the core research question and rigorously challenges every data collection point to reduce site and patient burden.
In regulated industries, AI's value isn't perfect breach detection but efficiently filtering millions of calls to identify a small, ambiguous subset needing human review. This shifts the goal from flawless accuracy to dramatically improving the efficiency and focus of human compliance officers.
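A minimal sketch of what that triage can look like in practice, assuming a hypothetical score_breach_risk() model and illustrative thresholds (none of which come from a specific vendor or compliance program): very low-risk calls are auto-cleared, clear signals are escalated, and only the ambiguous middle is queued for human review.

```python
# Sketch of confidence-based triage for compliance review.
# score_breach_risk() and the threshold values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CallRecord:
    call_id: str
    transcript: str

def score_breach_risk(transcript: str) -> float:
    """Placeholder for an ML model returning a breach-risk score in [0, 1]."""
    raise NotImplementedError

def triage(calls: list[CallRecord],
           clear_below: float = 0.05,
           escalate_above: float = 0.90) -> dict[str, list[CallRecord]]:
    """Route each call: auto-clear, auto-escalate, or queue for human review."""
    buckets: dict[str, list[CallRecord]] = {"cleared": [], "escalated": [], "human_review": []}
    for call in calls:
        risk = score_breach_risk(call.transcript)
        if risk < clear_below:
            buckets["cleared"].append(call)        # low risk: no human time spent
        elif risk > escalate_above:
            buckets["escalated"].append(call)      # strong signal: straight to compliance
        else:
            buckets["human_review"].append(call)   # ambiguous middle: the small subset humans read
    return buckets
```

The design point is that the thresholds, not the model, encode the risk appetite: tightening them shrinks the human-review queue, and loosening them widens it, without retraining anything.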
Don't wait for AI to be perfect. The correct strategy is to apply current AI models—which are roughly 60-80% accurate—to business processes where that level of performance is sufficient for a human to then review and bring to 100%. Chasing perfection in-house is a waste of resources given the pace of model improvement.
To introduce AI into a high-risk environment like legal tech, begin with tasks that don't involve sensitive data, such as automating marketing copy. This approach proves AI's value and builds internal trust, paving the way for future, higher-stakes applications like reviewing client documents.
While the FDA is often blamed for high trial costs, a major culprit is the consolidated contract research organization (CRO) market. These entrenched players lack incentives to adopt modern, cost-saving technologies, creating a structural bottleneck that prevents regulatory modernization from translating into cheaper and faster trials.
Despite a threefold increase in data collection over the last decade, the methods for cleaning and reconciling that data remain antiquated. Teams apply old, manual techniques to massive new datasets, creating major inefficiencies. The solution lies in applying automation and modern technology to data quality control, rather than throwing more people at the problem.
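As a rough illustration of what "automation instead of more people" can mean for data quality control, here is a small Python sketch using pandas; the column names and the plausible-range check are assumptions for the example, not any sponsor's actual data model. The point is that reviewers work from a short exception list rather than eyeballing every record.

```python
# Illustrative automated data-quality checks on trial data.
# Column names (subject_id, visit, systolic_bp, visit_date) are assumed for the example.
import pandas as pd

def quality_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Return one row per issue found, so reviewers see exceptions, not the whole dataset."""
    issues = []

    # Duplicate subject/visit rows suggest double entry or a reconciliation gap.
    dupes = df[df.duplicated(subset=["subject_id", "visit"], keep=False)]
    for idx in dupes.index:
        issues.append({"row": idx, "check": "duplicate_visit"})

    # Out-of-range vitals are flagged for query rather than silently corrected.
    bad_bp = df[(df["systolic_bp"] < 60) | (df["systolic_bp"] > 250)]
    for idx in bad_bp.index:
        issues.append({"row": idx, "check": "systolic_bp_out_of_range"})

    # Missing required fields block downstream reconciliation.
    missing = df[df["visit_date"].isna()]
    for idx in missing.index:
        issues.append({"row": idx, "check": "missing_visit_date"})

    return pd.DataFrame(issues)
```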
An "AI arms race" is underway where stakeholders apply AI to broken, adversarial processes. The true transformation comes from reinventing these workflows entirely, such as moving to real-time payment adjudication where trust is pre-established, thus eliminating the core conflict that AI is currently used to fight over.
To navigate the high stakes of public sector AI, classify initiatives into low, medium, and high risk. Begin with "low-hanging fruit" like automating internal backend processes that don't directly face the public. This builds momentum and internal trust before tackling high-risk, citizen-facing applications.
In high-stakes fields like medtech, the "fail fast" startup mantra is irresponsible. The goal should be to "learn fast" instead—maximizing learning cycles internally through research and simulation to de-risk products before they have real-world consequences for patient safety.
Clinical trial sites are increasingly leveraging their power to demand protocol modernization from sponsors. Merck changed its internal processes to allow non-physician sub-investigators only after a site refused to participate without that flexibility. This shows that operational change can be driven from the ground up by partners, not just top-down by sponsors.