Qualcomm's CEO argues the immediate value of AI PCs is economic, not experiential. SaaS providers, facing massive cloud AI bills, will drive adoption by pushing inference onto users' devices; offloading that compute fundamentally improves their business model.
AI's high computational cost (COGS) threatens SaaS margins. Nadella explains that just as the cloud expanded the market for computing far beyond the original server-license model, AI will create entirely new categories and user bases, offsetting the higher costs.
While on-device models are usually discussed in terms of privacy, they also eliminate API latency and per-call fees. This enables near-instant, high-volume processing at effectively zero marginal cost, a key advantage over metered cloud AI services.
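A back-of-the-envelope sketch of that advantage; the per-token price, request volume, and round-trip latencies below are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope comparison of cloud vs. on-device inference.
# All prices, volumes, and latencies are illustrative assumptions.

CLOUD_PRICE_PER_1K_TOKENS = 0.01  # hypothetical $ per 1K tokens for a hosted API
CLOUD_ROUND_TRIP_MS = 500         # network + queueing latency per request
LOCAL_ROUND_TRIP_MS = 50          # on-device model, no network hop

def monthly_cloud_cost(requests_per_day: int, tokens_per_request: int) -> float:
    """Marginal API spend for a cloud-hosted model over 30 days."""
    tokens = requests_per_day * 30 * tokens_per_request
    return tokens / 1000 * CLOUD_PRICE_PER_1K_TOKENS

# A feature that fires constantly (autocomplete, summarization) adds up fast:
print(f"cloud: ${monthly_cloud_cost(5_000, 800):,.2f}/month, ~{CLOUD_ROUND_TRIP_MS} ms/request")
print(f"local: $0.00/month marginal, ~{LOCAL_ROUND_TRIP_MS} ms/request")
```

At these assumed rates the cloud version costs $1,200 a month in pure API spend, while the on-device version's marginal cost is zero and an order of magnitude faster per request.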
The compute-heavy nature of AI makes traditional 80%+ SaaS gross margins impossible. Companies should embrace lower margins as proof of user adoption and value delivery. This strategy mirrors the successful on-premise to cloud transition, which ultimately drove massive growth for companies like Microsoft.
AI is making core software functionality nearly free, creating an existential crisis for traditional SaaS companies. The old model of 90%+ gross margins is disappearing. The future will be dominated by a few large AI players with lower margins, alongside a strategic shift towards monetizing high-value services.
The next major hardware cycle will be driven by user demand for local AI models that run on personal machines, ensuring privacy and control away from corporate or government surveillance. This shift from a purely cloud-centric paradigm will spark massive demand for more powerful personal computers and laptops.
Qualcomm's CEO argues that real-world context gathered from personal devices ("the Edge") is more valuable for training useful AI than generic internet data. Therefore, companies with a strong device ecosystem have a fundamental advantage in the long-term AI race.
The traditional SaaS model—high R&D/sales costs, low COGS—is being inverted. AI makes building software cheap but running it expensive due to high inference costs (COGS). This threatens profitability, as companies now face high customer acquisition costs AND high costs of goods sold.
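To make the inversion concrete, here is a toy gross-margin calculation; the revenue and COGS figures are hypothetical, chosen only to illustrate the shift:

```python
# Toy unit economics: classic SaaS vs. AI-era SaaS. All figures hypothetical.

def gross_margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

# Classic SaaS: serving one more user is cheap (hosting, support).
classic = gross_margin(revenue=100.0, cogs=15.0)

# AI-era SaaS: every query burns GPU time, so COGS scales with usage.
ai_era = gross_margin(revenue=100.0, cogs=55.0)

print(f"classic SaaS gross margin: {classic:.0%}")  # 85%
print(f"AI-era SaaS gross margin:  {ai_era:.0%}")   # 45%
```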
A cost-effective AI architecture uses a small, local model on the user's device to pre-process requests: the local model condenses a large input into a compact prompt before anything is sent to the expensive, powerful cloud model, so spend on metered tokens stays low.
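A minimal sketch of that two-tier flow. Both functions are hypothetical stand-ins: the "local model" is faked with simple truncation so the example runs, and the cloud call is a placeholder rather than any real API:

```python
# Sketch of the local-preprocessing pattern. The local model is faked with
# truncation and the cloud call is a stub; any small on-device model and any
# hosted API could fill these roles.

def local_condense(document: str, max_words: int = 200) -> str:
    """Stand-in for a small on-device model that distills a long input.
    Here we just truncate; a real system would run a local LLM."""
    return " ".join(document.split()[:max_words])

def cloud_answer(prompt: str) -> str:
    """Placeholder for a call to a large hosted model's API."""
    return f"[cloud model response to {len(prompt.split())}-word prompt]"

def ask(document: str, question: str) -> str:
    # Step 1 (cheap, on-device): shrink thousands of tokens to a short brief.
    brief = local_condense(document)
    # Step 2 (expensive, metered): the big model only sees the condensed
    # context, so per-request token spend stays small.
    return cloud_answer(f"Context: {brief}\n\nQuestion: {question}")

print(ask("word " * 5_000, "What does this say?"))
```

The design point is that the metered model only ever sees the condensed brief, so token spend scales with the summary size rather than the raw document.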
The shift to usage-based pricing for AI tools isn't just a revenue growth strategy. Enterprise vendors are adopting it to offset their own escalating cloud infrastructure costs, which scale directly with customer usage, thereby protecting their profit margins from their own suppliers.
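A toy comparison shows why; all prices and volumes below are hypothetical. Under a flat fee the vendor's margin collapses as usage grows, while usage-based pricing keeps revenue proportional to the supplier bill:

```python
# Toy contrast between flat-fee and usage-based pricing when the vendor's
# infrastructure cost scales with customer usage. All numbers hypothetical.

INFRA_COST_PER_QUERY = 0.002   # what the vendor pays its cloud supplier
USAGE_PRICE_PER_QUERY = 0.005  # per-query price charged to the customer
FLAT_MONTHLY_FEE = 2_000.0     # seat-style flat fee, independent of usage

for monthly_queries in (100_000, 1_000_000, 10_000_000):
    cost = monthly_queries * INFRA_COST_PER_QUERY
    usage_revenue = monthly_queries * USAGE_PRICE_PER_QUERY
    usage_margin = (usage_revenue - cost) / usage_revenue
    flat_margin = (FLAT_MONTHLY_FEE - cost) / FLAT_MONTHLY_FEE
    print(f"{monthly_queries:>10,} queries -> usage-based margin {usage_margin:.0%}, "
          f"flat-fee margin {flat_margin:.0%}")
```

The usage-based margin holds at 60% at any volume, while the flat-fee margin swings from 90% to deeply negative as a heavy customer's queries scale up.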
The biggest risk to the massive AI compute buildout isn't that scaling laws will break, but that consumers will be satisfied with a "115 IQ" AI running for free on their devices. If edge AI is sufficient for most tasks, it undermines the economic model for ever-larger, centralized "God models" in the cloud.