We scan new podcasts and send you the top 5 insights daily.
Ajay Banga argues that the real AI opportunity in emerging markets isn't large, power-hungry models. Instead, it's "Small AI"—localized applications on phones for tasks like medical diagnosis or farming advice. These are more feasible given constraints on electricity and computing power, as well as data-sovereignty requirements.
Previously, the high cost of software development meant products needed to achieve scale to be successful. AI lowers this barrier, making it practical to build custom applications for very small, niche audiences (e.g., a Super Bowl app for 15 family members) that were never financially viable before.
Instead of competing to build sovereign AI stacks from the chip up, India's strategic edge is in applying commoditized AI models to its unique, population-scale problems. This leverages the country's deep experience with real-world, large-scale implementation.
Joe Tsai reframes the US-China "AI race" as a marathon won by adoption speed, not model size. He notes China's focus on open source and smaller, specialized models (e.g., for mobile devices) is designed for faster proliferation and practical application. The goal is to diffuse technology throughout the economy quickly, rather than simply building the single most powerful model.
India is leveraging its upcoming AI Impact Summit to establish itself as the voice for the Global South in AI policy. By championing inclusive AI and showcasing successful development applications in healthcare and agriculture, India aims to create an alternative to the Western-centric AI narrative.
Contrary to the belief that AI will kill most apps, lower development costs will make it profitable to build and maintain software for smaller, niche audiences. This affordability will likely lead to an explosion of specialized apps rather than market consolidation.
Indian startups are carving a competitive niche by focusing on the AI application layer. Instead of building foundational models, their strength lies in developing and deploying practical AI solutions that solve real-world problems, which is where they can effectively compete on a global scale.
Language model development is diverging: massive models in the cloud and small language models (SLMs) at the edge. While SLMs lack the broad knowledge of their larger counterparts, they are highly effective when fine-tuned on specialized, domain-specific data, making them ideal for on-device intelligence.
For India, "leapfrogging" with AI means overcoming systemic resource shortages. AI acts as a horizontal productivity multiplier, enabling, for example, a limited number of doctors to deliver better healthcare outcomes through AI-powered diagnostics, thus enhancing sectoral capacity without massive infrastructure investment.
The true commercial impact of AI will likely come from small, specialized "micro models" solving boring, high-volume business tasks. Though highly valuable in aggregate, these models are cheap to run, and the revenue they generate cannot economically justify the current massive capital expenditure on AGI-focused data centers.
While the most powerful AI will reside in large "god models" (like supercomputers), the majority of the market volume will come from smaller, specialized models. These will cascade down in size and cost, eventually being embedded in every device, much like microchips proliferated from mainframes.