Despite the hype, LinkedIn found that third-party AI tools for coding and design don't work out of the box on their complex legacy stack. Success requires deep customization, re-architecting internal platforms for AI reasoning, and working in "alpha mode" with vendors to adapt their tools.
New McKinsey research reveals a significant AI adoption gap: while 88% of organizations use AI, nearly two-thirds haven't scaled it beyond pilots, so most companies stuck at the pilot stage are not actually behind their peers. This explains why only 39% report enterprise-level EBIT impact. True high performers succeed by fundamentally redesigning workflows, not just experimenting.
Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
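To make the "harness" idea concrete, here is a minimal Python sketch of the scaffolding around a model: a task-specific system prompt, a small tool registry, and a loop that executes the model's tool requests. The `call_model` stub and the JSON tool-call convention are illustrative assumptions, not any particular vendor's API.

```python
import json

def call_model(messages: list[dict]) -> str:
    # Placeholder: swap in a real chat-completion call from your provider.
    return json.dumps({"answer": "stubbed response"})

# Everything below is the harness: the scaffolding that shapes model behavior.
SYSTEM_PROMPT = (
    "You are a release-notes assistant. Use the available tools to gather "
    'facts, then reply with JSON: {"tool": ..., "args": {...}} or {"answer": ...}.'
)

TOOLS = {
    "search_tickets": lambda query: f"tickets matching {query!r}",  # stub tool
    "read_changelog": lambda version: f"changelog for {version}",   # stub tool
}

def run_harness(user_task: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_task}]
    for _ in range(max_steps):
        reply = json.loads(call_model(messages))
        if "answer" in reply:                            # final answer reached
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])   # route the tool call
        messages.append({"role": "tool", "content": str(result)})
    return "No answer within the step budget."

print(run_harness("Draft release notes for v2.4"))
```

The model itself is interchangeable here; the prompt, the tool registry, and the loop are where the task-specific value lives.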
Don't just sprinkle AI features onto your existing product ('AI at the edge'). Transformative companies rethink workflows and shrink their legacy codebase, making the LLM a core part of the solution. This means re-architecting the product from the ground up, not just enhancing it.
High productivity isn't about using AI for everything. It's a disciplined workflow: break a task into sub-problems, use an LLM for high-leverage parts like scaffolding and tests, and reserve human focus for the core implementation. This avoids the sunk-cost trap of forcing AI onto unsuitable tasks.
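As a rough illustration of that split (the function and tests below are invented for this sketch, not taken from the source): the LLM drafts the cheap-to-review scaffolding, such as parametrized tests, while the human writes the core logic.

```python
import pytest

# --- Human-written core: the part that deserves full attention ---
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent; reject out-of-range discounts."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return price * (1 - pct / 100)

# --- LLM-drafted scaffolding: cheap to generate, quick to review ---
@pytest.mark.parametrize("price,pct,expected", [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (80.0, 100, 0.0),
])
def test_apply_discount(price, pct, expected):
    assert apply_discount(price, pct) == pytest.approx(expected)

def test_rejects_out_of_range_discount():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```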
Enterprises struggle to get value from AI because they lack iterative, data-science expertise in-house. The winning model for AI companies isn't just selling APIs, but embedding "forward deployment" teams of engineers and scientists to co-create solutions, closing the gap between prototype and production value.
Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment": capturing the unique, context-specific expertise of a company's own employees. The same document can mean something entirely different from one company to the next, which is why internal labeling is required.
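A toy illustration of why the same document needs company-specific labels (the document and label schemas below are hypothetical):

```python
# The same raw document carries different judgments in different companies.
raw_doc = "Q3 variance report: spend exceeded plan by 12%."

# Hypothetical label schemas encoding each company's internal judgment.
labels_company_a = {"doc": raw_doc, "category": "routine_finance_review",
                    "action": "archive"}
labels_company_b = {"doc": raw_doc, "category": "budget_escalation",
                    "action": "notify_cfo"}

# Labeled pairs like these become the fine-tuning and evaluation data
# that no off-the-shelf model ships with.
training_examples = [labels_company_a, labels_company_b]
```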
The true enterprise value of AI lies not in consuming third-party models, but in building internal capabilities to diffuse intelligence throughout the organization. This means creating proprietary "AI factories" rather than just using external tools and admiring others' success.
A critical learning at LinkedIn was that pointing an AI at an entire company drive for context results in poor performance and hallucinations. The team had to manually curate "golden examples" and specific knowledge bases to train agents effectively, as the AI couldn't discern quality on its own.
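A rough sketch of that pattern (the folder path, helper, and examples are hypothetical, not LinkedIn's actual setup): the agent's context is assembled only from a hand-curated knowledge base plus a few vetted "golden examples", never from the whole drive.

```python
from pathlib import Path

# Hand-curated sources reviewed by domain experts -- not the entire drive.
CURATED_KNOWLEDGE = Path("knowledge_base/curated")   # hypothetical folder
GOLDEN_EXAMPLES = [
    {"task": "Summarize a quarterly sales review",
     "ideal_output": "Three bullets: pipeline, risks, asks."},
    {"task": "Draft a customer follow-up email",
     "ideal_output": "Short, references the last call, one clear next step."},
]

def build_agent_context(task: str, max_docs: int = 5) -> str:
    """Assemble a prompt from vetted documents and golden examples only."""
    docs = sorted(CURATED_KNOWLEDGE.glob("*.md"))[:max_docs]
    doc_text = "\n\n".join(p.read_text() for p in docs)
    examples = "\n".join(
        f"Task: {ex['task']}\nIdeal output: {ex['ideal_output']}"
        for ex in GOLDEN_EXAMPLES
    )
    return (
        f"Reference material (curated):\n{doc_text}\n\n"
        f"Golden examples of good answers:\n{examples}\n\n"
        f"Current task: {task}"
    )
```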
Dylan Field is skeptical that disposable, AI-generated apps will replace complex SaaS products. Real business software must handle countless edge cases, scale with teams, and support shared workflows—a level of complexity and institutional knowledge that today's agents are far from mastering.
Using a composable, 'plug and play' architecture lets teams build specialized AI agents faster and with less overhead than integrating a monolithic third-party tool. It enables lightweight, tailored solutions for niche use cases that keep the entire workflow within one platform and avoid the complexity of external API integrations.
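A minimal sketch of what 'plug and play' composition can look like (the `Agent` protocol and agent names are assumptions for illustration): every agent exposes the same interface, so specialized agents can be chained inside one platform without external integrations.

```python
from dataclasses import dataclass
from typing import Callable, Protocol

class Agent(Protocol):
    name: str
    def run(self, payload: dict) -> dict: ...

@dataclass
class FunctionAgent:
    """Wraps any callable as a pluggable agent with a common interface."""
    name: str
    fn: Callable[[dict], dict]

    def run(self, payload: dict) -> dict:
        return self.fn(payload)

# Lightweight, niche agents built in-house -- no external API integration.
triage = FunctionAgent("triage", lambda p: {**p, "queue": "billing"})
drafter = FunctionAgent("drafter", lambda p: {**p, "draft": f"Reply re: {p['subject']}"})
reviewer = FunctionAgent("reviewer", lambda p: {**p, "approved": len(p["draft"]) < 500})

def run_pipeline(agents: list[Agent], payload: dict) -> dict:
    """Compose agents by chaining their outputs inside one platform."""
    for agent in agents:
        payload = agent.run(payload)
    return payload

print(run_pipeline([triage, drafter, reviewer],
                   {"subject": "Invoice discrepancy"}))
```

Swapping an agent in or out is just editing the list passed to the pipeline, which is what keeps the overhead low compared with wiring up a monolithic external tool.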