While phones are single-app devices, augmented reality glasses can replicate a multi-monitor desktop experience on the go. This "infinite workstation" for multitasking is a powerful, under-discussed utility that could be a primary driver for AR adoption.
The ultimate winner in the AI race may not be the most advanced model, but the most seamless, low-friction user interface. Since most queries are simple, the battle is shifting to hardware that is "closest to the person's face," like glasses or ambient devices, where distribution is king.
AI will operate our computers, shifting our primary role to monitoring. This frees people from desks and accelerates the need for a mobile interface like AR glasses to observe AI and bring work into the real world, transforming productivity.
The seemingly unsuccessful thin iPhone Air is likely a strategic R&D initiative to master miniaturizing core components like silicon and PCBs. This effort paves the way for next-generation wearables like AI glasses, making the phone a public "road sign" for future products rather than a standalone sales priority.
Instead of visually obstructive headsets or glasses, the most practical and widely adopted form of AR will be audio-based. The evolution of Apple's AirPods, integrated seamlessly with an iPhone's camera and AI, will provide contextual information without the social and physical friction of wearing a device on your face.
While chatbots are an effective entry point, they are limiting for complex creative tasks. The next wave of AI products will feature specialized user interfaces that combine fine-grained, gesture-based controls for professionals with hands-off automation for simpler tasks.
Advanced AR glasses create a new social problem of "deepfake eye contact," where users can feign presence in a conversation while mentally multitasking. This technology threatens to erode genuine human connection by making it impossible to know whether you have someone's true attention.
We don't perceive reality directly; our brain constructs a predictive model, filling in gaps and warping sensory input to help us act. Augmented reality isn't a tech fad but an intuitive evolution of this biological process, superimposing new data onto our brain's existing "controlled model" of the world.
While chat works for one-on-one human-AI interaction, the infinite canvas is a superior paradigm for collaboration among humans and multiple agents. It allows for simultaneous, non-distracting parallel work, asynchronous handoffs, and persistent spatial context—all of which are difficult to achieve in a linear, turn-based chat interface.
Despite the hype, AI's impact on daily life remains minimal because most consumer apps haven't changed. The true societal shift will occur when new, AI-native applications are built from the ground up, much like the iPhone enabled a new class of apps, rather than just bolting AI features onto old frameworks.
AR and robotics are bottlenecked by software's inability to truly understand the 3D world. Spatial intelligence is the fundamental operating system that connects a device's digital "brain" to physical reality. This layer is crucial for enabling meaningful interaction and maturing the hardware platforms.