To break the data bottleneck in AI protein engineering, companies now generate massive synthetic datasets. By creating novel "synthetic epitopes" and measuring binding against them, they can produce thousands of validated positive and negative training examples in a single experiment, dramatically accelerating model development.
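As a rough illustration of what such a dataset looks like, the sketch below labels antibody-antigen pairs as binders or non-binders from measured affinities; the Kd cutoff, field names, and sequences are illustrative assumptions, not any company's actual pipeline.

```python
# Minimal sketch: turning raw binding measurements into labeled training
# examples. The 1 uM cutoff and record fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BindingMeasurement:
    antibody_seq: str   # antibody variable-region sequence
    antigen_seq: str    # target (e.g. synthetic epitope) identifier or sequence
    kd_molar: float     # measured dissociation constant, in molar

def label_examples(measurements, kd_cutoff=1e-6):
    """Label each antibody-antigen pair as a binder (1) or non-binder (0)."""
    dataset = []
    for m in measurements:
        label = 1 if m.kd_molar <= kd_cutoff else 0
        dataset.append((m.antibody_seq, m.antigen_seq, label))
    return dataset

# Example: two measurements from one hypothetical experiment
examples = label_examples([
    BindingMeasurement("EVQLVESGGG...", "SYNTHETIC_EPITOPE_01", 5e-9),  # strong binder -> 1
    BindingMeasurement("QVQLQQSGAE...", "SYNTHETIC_EPITOPE_01", 3e-4),  # weak/no binding -> 0
])
```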
The power of AI for Novonesis isn't the algorithm itself, but its application to a massive, well-structured proprietary dataset. Their organized library of 100,000 strains allows AI to rapidly predict protein shapes and accelerate R&D in ways competitors cannot match.
For AI in protein engineering to evolve from pattern matching to an understanding of the underlying physics, structural data alone is insufficient. Models need physical parameters such as Gibbs free energy (ΔG), obtainable from affinity measurements, to become truly predictive and transformative for therapeutic development.
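The link between the two quantities is the standard thermodynamic relation ΔG° = RT·ln(Kd) (with Kd referenced to a 1 M standard state), so every affinity measurement can be converted into a free energy. A minimal sketch, with illustrative values:

```python
# Converting a measured binding affinity (Kd) into the Gibbs free energy of
# binding via the standard relation dG = R*T*ln(Kd). Values are illustrative.
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K (25 C)

def delta_g_from_kd(kd_molar: float) -> float:
    """Standard Gibbs free energy of binding in kJ/mol (negative = favorable)."""
    return R * T * math.log(kd_molar) / 1000.0

print(delta_g_from_kd(1e-9))   # ~ -51.4 kJ/mol for a 1 nM binder
```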
Generating the volume of protein affinity data produced by a single multi-week A-AlphaBio experiment with standard methods like surface plasmon resonance (SPR) would cost an economically unfeasible $100-$500 million. This staggering difference illustrates the fundamental barrier that new high-throughput platforms are designed to overcome.
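A hedged back-of-envelope check of that comparison: only the $100-$500 million total comes from the text above; the per-measurement SPR cost and the interaction count below are assumptions chosen purely for illustration.

```python
# Back-of-envelope cost check. The per-measurement SPR cost and the number of
# interactions are assumptions for illustration; only the quoted $100-500M
# total comes from the text above.
measurements_per_experiment = 2_000_000   # assumed protein-protein interactions measured
spr_cost_per_measurement = (50, 250)      # assumed cost range per SPR measurement, USD

low = measurements_per_experiment * spr_cost_per_measurement[0]
high = measurements_per_experiment * spr_cost_per_measurement[1]
print(f"Estimated SPR cost: ${low/1e6:.0f}M - ${high/1e6:.0f}M")  # ~$100M - $500M
```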
Public internet data has been largely exhausted for training AI models. The real competitive advantage and source for next-generation, specialized AI will be the vast, untapped reservoirs of proprietary data locked inside corporations, like R&D data from pharmaceutical or semiconductor companies.
The next leap in biotech moves beyond applying AI to existing data. CZI is pioneering a model in which 'frontier biology' and 'frontier AI' are developed in tandem: experiments are designed specifically to generate novel data that will ground and improve future AI models, creating a virtuous feedback loop.
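One way to picture that loop is an active-learning cycle in which the model chooses the experiments it would learn most from each round. The sketch below is a self-contained toy, not CZI's actual system; every class and function in it is a hypothetical placeholder.

```python
# Toy "lab in the loop": each round the model asks for the candidates it is
# least certain about, new measurements come back, and the model is retrained.
import random

random.seed(0)

def run_experiment(x):
    """Stand-in for a wet-lab measurement of candidate x."""
    return x * 2 + random.gauss(0, 0.1)

class ToyModel:
    def __init__(self):
        self.data = []
    def uncertainty(self, x):
        # Least certain about candidates far from anything already measured.
        return min((abs(x - seen_x) for seen_x, _ in self.data), default=float("inf"))
    def retrain(self, new_data):
        self.data.extend(new_data)
        return self

def lab_in_the_loop(model, candidates, rounds=3, batch_size=2):
    for _ in range(rounds):
        ranked = sorted(candidates, key=model.uncertainty, reverse=True)
        batch, candidates = ranked[:batch_size], ranked[batch_size:]
        model = model.retrain([(x, run_experiment(x)) for x in batch])
    return model

model = lab_in_the_loop(ToyModel(), candidates=list(range(10)))
print(len(model.data))  # 6 measurements gathered over 3 rounds
```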
The future of valuable AI lies not in models trained on the abundant public internet, but in those built on scarce, proprietary data. For fields like robotics and biology, this data doesn't exist to be scraped; it must be actively created, making the data generation process itself the key competitive moat.
Traditional antibody optimization is a slow, iterative process of improving one property at a time, taking 1-3 years. By using high-throughput data to train machine learning models, companies like A-AlphaBio can now simultaneously optimize multiple characteristics such as affinity, stability, and developability in a single three-month process.
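A minimal sketch of what optimizing several properties at once can look like in code: candidates are ranked by a weighted combination of predicted properties rather than improved one property per design cycle. The predictors, weights, and sequences below are illustrative assumptions, not A-AlphaBio's actual models.

```python
# Sketch of simultaneous multi-property ranking: each candidate antibody is
# scored by several (hypothetical) learned predictors at once. The toy
# predictors and weights are illustrative assumptions only.
def rank_candidates(sequences, predictors, weights):
    """Rank sequences by a weighted sum of predicted properties."""
    scored = []
    for seq in sequences:
        props = {name: fn(seq) for name, fn in predictors.items()}
        score = sum(weights[name] * props[name] for name in predictors)
        scored.append((score, seq, props))
    return sorted(scored, key=lambda t: t[0], reverse=True)

# Toy stand-ins for learned models of affinity, stability, and developability.
predictors = {
    "affinity":       lambda s: s.count("Y") / max(len(s), 1),
    "stability":      lambda s: s.count("P") / max(len(s), 1),
    "developability": lambda s: 1.0 - s.count("C") / max(len(s), 1),
}
weights = {"affinity": 0.5, "stability": 0.3, "developability": 0.2}

best = rank_candidates(["EVQLYYSGGG", "QVQLCPSGAE", "EVQLPYSGGY"], predictors, weights)
print(best[0][1])  # highest-scoring candidate under this toy scoring
```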
Current AI for protein engineering relies on small public datasets such as the Protein Data Bank (PDB), which holds on the order of only a couple hundred thousand experimentally determined structures, causing models to "hallucinate" or default to known examples. This data bottleneck, orders of magnitude smaller than the corpora used to train LLMs, hinders the development of novel therapeutics.
The bottleneck for AI in drug development isn't the sophistication of the models but the absence of large-scale, high-quality biological data sets. Without comprehensive data on how drugs interact within complex human systems, even the best AI models cannot make accurate predictions.
The founder of AI and robotics firm Medra argues that scientific progress is not limited by a lack of ideas or AI-generated hypotheses. Instead, the critical constraint is the physical capacity to test these ideas and generate high-quality data to train better AI models.