A COVID-19 trial struggled to recruit patients because its sign-up form had 400 questions, and the only person who could edit the underlying PHP file was a grad student. This illustrates how tiny, absurd operational inefficiencies, trapped in silos, can accumulate and severely hinder massive, capital-intensive research projects.

Related Insights

Critical knowledge on how to run clinical trials is not formalized in textbooks or courses but is passed down through a slow apprenticeship model. This limits the spread of best practices and forces even highly educated scientists to "fly blind" when entering the industry, perpetuating inefficiencies.

Clinical trial protocols become overly complex because teams copy and paste from previous studies, accumulating unnecessary data points and criteria. Merck advocates for "protocol lean design," which starts from the core research question and rigorously challenges every data collection point to reduce site and patient burden.

While the UK's world-class universities provide a rich pipeline of scientific talent for biotechs, the country's clinical trial infrastructure is a significant hurdle. Immense pressure on the NHS delays site openings and patient recruitment, a fundamental friction point in the biotech value chain.

When a billion-dollar drug trial fails, society learns nothing from the operational process. The detailed documentation of regulatory interactions, manufacturing, and trial design—the "lab notes" of clinical development—is locked away as a trade secret and effectively destroyed, preventing collective industry learning.

The most valuable lessons in clinical trial design come from understanding what went wrong. By analyzing the protocols of failed studies, researchers can identify hidden biases, flawed methodologies, and uncontrolled variables, learning precisely what to avoid in their own work.

A significant portion of biotech's high costs stems from its "artisanal" nature, where each company develops bespoke digital workflows and data structures. This inefficiency arises because startups are often structured for acquisition after a single clinical success, not for long-term, scalable operations.

Despite a threefold increase in data collection over the last decade, the methods for cleaning and reconciling that data remain antiquated. Teams apply old, manual techniques to massive new datasets, creating major inefficiencies. The solution lies in applying automation and modern technology to data quality control, rather than throwing more people at the problem.
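
As a rough sketch of what that automation might look like (the dataset, column names, and 0.5 g/dL tolerance below are hypothetical illustrations, not details from the source), a short script can reconcile two data extracts and flag the discrepancies a team would otherwise hunt down by hand:

```python
import pandas as pd

# Hypothetical extracts: site-entered case report form data vs. a central lab feed.
edc = pd.DataFrame({
    "subject_id": ["S001", "S002", "S003", "S004"],
    "visit": ["W4", "W4", "W4", "W4"],
    "hemoglobin": [13.2, 11.8, None, 14.6],
})
lab = pd.DataFrame({
    "subject_id": ["S001", "S002", "S003", "S005"],
    "visit": ["W4", "W4", "W4", "W4"],
    "hemoglobin": [13.2, 12.9, 12.1, 13.0],
})

# Join on subject and visit; an outer join keeps records that exist in only
# one source, so missing entries surface alongside value mismatches.
merged = edc.merge(lab, on=["subject_id", "visit"],
                   how="outer", suffixes=("_edc", "_lab"), indicator=True)

# Flag three classes of issue a manual review would otherwise hunt for:
# records present in only one source, missing values, and values that
# disagree beyond a (hypothetical) 0.5 g/dL tolerance.
issues = []
for _, row in merged.iterrows():
    if row["_merge"] != "both":
        issues.append((row["subject_id"], "record missing from one source"))
    elif pd.isna(row["hemoglobin_edc"]) or pd.isna(row["hemoglobin_lab"]):
        issues.append((row["subject_id"], "missing value"))
    elif abs(row["hemoglobin_edc"] - row["hemoglobin_lab"]) > 0.5:
        issues.append((row["subject_id"], "values disagree beyond tolerance"))

for subject, problem in issues:
    print(f"QUERY {subject}: {problem}")
```

Checks like these run in seconds across an entire study, which is the point: the query list scales with the data, not with headcount.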

The process of testing drugs in humans—clinical development—is a massive, under-studied bottleneck, accounting for 70% of drug development costs. Despite its importance, there is surprisingly little public knowledge, academic research, or even basic documentation on how to improve this crucial stage.

Unlike software startups, which can "fail fast" and pivot cheaply, a single biotech clinical program costs tens of millions of dollars. This high cost of failure means the industry values experienced founders who have learned from past mistakes, a direct contrast to Silicon Valley's youth-centric culture.

With clinical development cycles lasting 7–10 years, junior team members rarely see a project through to completion. Their career incentive becomes pushing a drug to the next stage to demonstrate progress, rather than ensuring its ultimate success. This pathology leads to deferred problems and siloed knowledge.