The forthcoming OS2 introduces a "Creations" feature. Users can speak a prompt like "I want to play snake" and the device's agent will generate a functional application on the fly, tailored to the R1's hardware specifications.

Related Insights

For a startup introducing a new AI-native experience without control over an OS like iOS or Android, hardware was the only viable path. Launching as an app would get lost in the noise; the physical device created its own distribution channel.

The PhotoGenius mobile app uses a voice-first, conversational interface for nuanced photo editing commands like 'make me smile slightly without teeth'. This signals a potential paradigm shift in UX for creative tools, moving away from complex menus and sliders towards natural language interaction.

Dictating prompts to AI coding tools, rather than typing them, allows for faster and more detailed instructions. Speaking your thought process naturally includes more context and nuance, which leads to better results from the AI. Tools like Whisperflow are optimized for developer terminology, yielding higher transcription accuracy.

Beyond using pre-made skills, users can simply prompt Claude to create a new skill for itself. The AI understands the required format and can generate the instructional text for a new capability, such as crafting marketing hooks that create FOMO. This democratizes the process of AI customization.
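As a rough illustration, a generated skill might look something like the sketch below, assuming the SKILL.md format with YAML frontmatter that Claude's skills use; the name, description, and instructions here are entirely hypothetical.

```markdown
---
name: fomo-marketing-hooks
description: Craft marketing hooks that create urgency and FOMO. Use when the
  user asks for scarcity-driven copy, launch teasers, or attention-grabbing hooks.
---

# FOMO Marketing Hooks

When asked for a hook:
1. Lead with scarcity or a deadline ("Only 48 hours left...").
2. Imply social proof ("Join the 10,000 people who already...").
3. End with a clear, low-friction call to action.
```

The frontmatter tells the model when to load the skill; the body is the instructional text the paragraph above describes the AI generating for itself.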

The R1 is designed for fragmented, quick-use cases, acting as a dedicated device for tasks like translation or quick queries. This positions it as a competitor to specific apps like ChatGPT, not the iPhone, avoiding a direct battle with smartphones.

AI is moving beyond text generation. With Claude's 'Artifact Builder' skill, the model can create and deploy functional web applications directly in the chat window. A user can prompt it to build a tool, such as a UTM link generator, and receive a usable app, not just code snippets.
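The article doesn't show the generated app itself, but the core logic of a UTM link generator is simple enough to sketch. Here is an illustrative Python version using only the standard library; the function name and parameters are this sketch's own, not anything Claude is known to produce.

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_utm_params(url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM tracking parameters to a URL, preserving
    any query parameters the URL already carries."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_utm_params("https://example.com/page", "newsletter", "email", "launch"))
# https://example.com/page?utm_source=newsletter&utm_medium=email&utm_campaign=launch
```

A chat-built "app" would wrap exactly this kind of function in a small web UI; the point of the insight is that the user receives the wrapped, working tool rather than this raw snippet.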

The LAM (Large Action Model) is not a model in the traditional sense but an agent system. It uses the best available LLMs for language understanding and connects them to Rabbit's proprietary technology for executing actions, allowing modular upgrades of the underlying AI.
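The modularity described above comes down to dependency injection: the language model sits behind an interface, so it can be swapped without touching the action layer. A minimal Python sketch, with entirely hypothetical class and method names (Rabbit's actual internals are not public):

```python
from typing import Protocol

class LanguageModel(Protocol):
    """Any LLM backend that can turn an utterance into a structured intent."""
    def parse_intent(self, utterance: str) -> dict: ...

class ActionLayer:
    """Stand-in for the proprietary action-execution technology."""
    def execute(self, intent: dict) -> str:
        return f"executed {intent['action']}"

class AgentSystem:
    """The LLM is injected, so upgrading it never touches the action layer."""
    def __init__(self, llm: LanguageModel, actions: ActionLayer):
        self.llm = llm
        self.actions = actions

    def handle(self, utterance: str) -> str:
        intent = self.llm.parse_intent(utterance)
        return self.actions.execute(intent)

class StubLLM:
    """Dummy backend; a real system would call whichever LLM is best today."""
    def parse_intent(self, utterance: str) -> dict:
        return {"action": "play_music"}

agent = AgentSystem(StubLLM(), ActionLayer())
print(agent.handle("play some music"))  # executed play_music
```

Swapping `StubLLM` for a better model is a one-line change at the call site, which is the "modular upgrade" property the insight highlights.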

A new software paradigm, "agent-native architecture," treats AI as a core component, not an add-on. It progresses through levels: first the agent can perform any UI action, then trigger any backend code, and finally carry out any developer task, such as writing and deploying new code, enabling user-driven app customization.
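The three levels can be made concrete as an interface contract an agent-native app would expose to its agent. This is a speculative Python sketch of that idea, not a published specification; all names are illustrative.

```python
from abc import ABC, abstractmethod

class AgentNativeApp(ABC):
    """Illustrative contract for the three levels of agent-native architecture."""

    @abstractmethod
    def perform_ui_action(self, action: str) -> None:
        """Level 1: the agent can take any action a user could take in the UI."""

    @abstractmethod
    def invoke_backend(self, endpoint: str, payload: dict) -> dict:
        """Level 2: the agent can trigger any backend code path directly,
        bypassing the UI entirely."""

    @abstractmethod
    def run_developer_task(self, task: str) -> None:
        """Level 3: the agent can write, test, and deploy new code,
        which is what enables user-driven customization of the app itself."""
```

Each level strictly widens the agent's reach: from the surface the user sees, to the logic beneath it, to the codebase that defines both.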

The new Spiral app, with its complex UI and multiple features, was built almost entirely by one person. This was made possible by leveraging AI coding agents like Droid and Claude, which dramatically accelerate the development process from idea to a beautiful, functional product.

Rabbit identified a key demographic: children too old to be completely offline but too young for a smartphone and its distractions. The R1 serves as a controlled, dedicated AI device for this 'in-between' age group.