Amazon has refocused its top AI executive, Swami Sivasubramanian, solely on new generative AI products. This push for innovation risks deprioritizing established, widely used tools like SageMaker, which many customers prefer as cheaper, more practical alternatives to cutting-edge large language models (LLMs).
AWS leaders are concerned that building flagship products on third-party models like Anthropic's creates no sustainable advantage. They are therefore pressuring internal teams to build on Amazon's own, often less capable Nova models to develop a unique "special sauce" that differentiates their offerings from competitors.
An internal AWS document reveals that startups are diverting budgets toward AI models and inference, delaying adoption of traditional cloud services like compute and storage. This suggests AI spend is becoming a substitute for, not an addition to, core infrastructure costs, posing a direct threat to AWS's startup market share.
Integrating generative AI into Alexa was complex because of its massive scale: hundreds of millions of users, diverse devices, and millions of existing functions. The challenge was not simply bolting on an LLM but weaving the new technology into this landscape without disrupting the user experience.
Within Amazon, the Nova family of AI models has earned the derisive nickname "Amazon Basics," a reference to the company's cheap private-label brand. This highlights internal sentiment that the models are reliable and cheap but not state-of-the-art, forcing many of Amazon's own AI products to rely on partner models.
Despite public messaging about culture or bureaucracy, internal memos and private conversations with leaders reveal that generative AI's productivity gains are the primary driver behind major tech layoffs, such as those at Amazon.
A fundamental shift is underway: startups are allocating limited budgets toward specialized AI models and developer tools rather than defaulting to AWS for all infrastructure. This signals an unbundling of the traditional cloud stack and a change in platform priorities.
An analyst categorizes large tech companies into AI "laggards, tweeners, and darlings." Tweeners, like Amazon and Meta, are in a precarious catch-up position. Unlike darlings, they must make significant investments and organizational shifts to improve their AI models and monetization, signaling a period of higher spending and strategic refocusing.
Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms let them "hot swap" models, including smaller, specialized ones, for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
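To make the hot-swap idea concrete, here is a minimal Python sketch of what such an orchestration layer might look like. Everything in it is a hypothetical illustration: the ModelClient protocol, the EchoModel stand-in, and the model names and prices are assumptions for the example, not any real vendor's API.

```python
from dataclasses import dataclass
from typing import Dict, Protocol


class ModelClient(Protocol):
    """Common interface every model backend must satisfy (illustrative)."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoModel:
    """Stand-in backend; a real client would wrap a vendor SDK or HTTP API."""
    name: str
    cost_per_1k_tokens: float  # made-up pricing, used only to show the tradeoff

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


class Orchestrator:
    """Routes each task to a registered model, so backends can be swapped
    without touching application code."""

    def __init__(self) -> None:
        self._routes: Dict[str, ModelClient] = {}

    def register(self, task: str, model: ModelClient) -> None:
        self._routes[task] = model  # re-registering a task hot-swaps the model

    def run(self, task: str, prompt: str) -> str:
        return self._routes[task].complete(prompt)


# Wire a cheap, specialized model to a routine task and a frontier-class
# model to a harder one; names and prices are invented for illustration.
orch = Orchestrator()
orch.register("classify", EchoModel("small-classifier", cost_per_1k_tokens=0.05))
orch.register("draft", EchoModel("frontier-llm", cost_per_1k_tokens=3.00))

print(orch.run("classify", "Is this ticket urgent?"))

# Hot swap: replace the classifier with a cheaper model at runtime.
orch.register("classify", EchoModel("distilled-classifier", cost_per_1k_tokens=0.01))
print(orch.run("classify", "Is this ticket urgent?"))
```

The point of the sketch is the single routing interface: because application code calls the orchestrator rather than any one provider, a model can be replaced per task to tune cost or quality with no downstream changes.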
As foundational AI models become commoditized, the key differentiator is shifting from marginal improvements in model capability to superior user experience and productization. Companies that focus on polish, ease of use, and thoughtful integration will win, making product managers the new heroes of the AI race.
Amazon CEO Andy Jassy describes current AI adoption as a "barbell": AI labs on one end and enterprises using AI for productivity on the other. He believes the largest future market is the "middle," enterprises deploying AI in their core production apps, and AWS's strategy is to leverage its data gravity to win this massive, untapped segment.