Adding existing health sensors like heart rate monitors to new devices like smart glasses offers diminishing returns. The real innovation and value proposition for new wearables lie in developing new interaction paradigms, particularly advanced, low-latency audio interfaces for seamless communication in any environment.
AI devices must be close to human senses to be effective. Glasses are the most natural form factor: they capture sight and sound and sit close to the mouth for speech. This sensory proximity gives them an advantage over other wearables like earbuds or pins.
Startups are overwhelmingly focusing on rings for new AI wearables. The form factor is seen as ideal for discreet, dedicated use cases like health tracking and quick AI voice interactions, positioning rings apart from the general-purpose smartphone and suggesting that a new, specialized device category is forming.
The ultimate winner in the AI race may not be the most advanced model, but the most seamless, low-friction user interface. Since most queries are simple, the battle is shifting to hardware that is 'closest to the person's face,' like glasses or ambient devices, where distribution is king.
Instead of visually obstructive headsets or glasses, the most practical and widely adopted form of AR will be audio-based. The evolution of Apple's AirPods, integrated seamlessly with an iPhone's camera and AI, will provide contextual information without the social and physical friction of wearing a device on your face.
Leaks about OpenAI's hardware team exploring a behind-the-ear device suggest a strategic interest in ambient computing. This moves beyond screen-based chatbots and points towards a future of always-on, integrated AI assistants that compete directly with audio wearables like Apple's AirPods.
The most compelling user experience in Meta's new glasses isn't a visual overlay but audio augmentation. A feature that isolates and live-transcribes one person's speech in a loud room creates a "super hearing" effect. This, along with live translation, is a unique value proposition that a smartphone cannot offer.
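To make concrete what a separate-then-transcribe pipeline of this kind involves, here is a minimal sketch using open-source models. This is not Meta's implementation; the model checkpoints, file names, and the naive choice of "speaker 0" as the target are all illustrative assumptions.

```python
# Sketch: "super hearing" as source separation followed by transcription.
# Assumes the open-source `speechbrain` and `openai-whisper` packages;
# nothing here reflects Meta's actual on-device stack.
import torchaudio
import whisper
# SpeechBrain >= 1.0; older releases expose this under speechbrain.pretrained
from speechbrain.inference.separation import SepformerSeparation

# 1. Split a noisy room recording into individual speaker tracks.
separator = SepformerSeparation.from_hparams(
    source="speechbrain/sepformer-wsj02mix",   # 2-speaker separation model
    savedir="pretrained_models/sepformer",
)
est_sources = separator.separate_file(path="noisy_room.wav")  # (1, time, n_speakers)

# 2. Keep one speaker's track (naively, speaker 0; a real product
#    would pick the target by gaze, beamforming, or user selection).
target = est_sources[:, :, 0].detach().cpu()
torchaudio.save("target_speaker.wav", target, 8000)  # this checkpoint runs at 8 kHz

# 3. Transcribe just that isolated voice.
asr = whisper.load_model("base")
print(asr.transcribe("target_speaker.wav")["text"])
```

Even this toy version shows why the glasses form factor matters: the hard part is not transcription but selecting the right voice, which head-mounted microphones and gaze make far easier than a phone in a pocket.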
After the failure of ambitious devices like the Humane AI Pin, a new generation of AI wearables is finding a foothold by focusing on a single, practical use case: AI-powered audio recording and transcription. This refined focus on a proven need increases their chances of survival and adoption.
Past smart glasses failed not because of the hardware but because they lacked a compelling use case. Hassabis argues that a universal, context-aware digital assistant working seamlessly across all devices is the true 'killer app' that will finally make wearables like smart glasses indispensable.
While many companies pursue visual AR, audio AR ("hearables") remains an untapped frontier. The visual channel is already saturated, while the auditory channel has spare attentional bandwidth, making it ideal for layering non-intrusive, real-time information for applications like navigation, trading, or health monitoring, as the sketch below illustrates.
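As one concrete example of "layering information" in audio, a navigation cue can be rendered as a stereo tone panned toward the destination's bearing. This is a minimal sketch with invented parameters (tone frequency, duration, bearing convention), not any product's actual rendering pipeline, which would typically use full HRTF-based spatialization rather than simple panning.

```python
# Sketch: an audio-AR navigation cue via constant-power stereo panning.
# All parameters below are illustrative assumptions.
import math
import wave
import numpy as np

SAMPLE_RATE = 44100

def navigation_cue(bearing_deg: float, duration: float = 0.3, freq: float = 880.0):
    """Return a stereo tone panned toward `bearing_deg`
    (0 = straight ahead, -90 = hard left, +90 = hard right)."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    tone = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # fade to avoid clicks
    # Map bearing to a pan position in [0, 1], then apply constant-power gains.
    pan = (max(-90.0, min(90.0, bearing_deg)) + 90.0) / 180.0
    left = tone * math.cos(pan * math.pi / 2)
    right = tone * math.sin(pan * math.pi / 2)
    return np.stack([left, right], axis=1)  # (samples, 2), interleaved L/R frames

# Example: the destination is 40 degrees to the user's right.
stereo = (navigation_cue(40.0) * 32767).astype(np.int16)
with wave.open("cue.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(stereo.tobytes())
```

The point of the sketch is the interaction model: the user hears where to go without looking at anything, which is exactly the non-intrusive layering that visual AR struggles to match.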
Razer's bet for bringing AI into the real world is on headphones. They argue it's a universal, unobtrusive form factor that leverages existing user behavior, avoiding the adoption friction and social awkwardness associated with smart glasses or other novel devices.