Putting frontier AI models on every device
We’re building text and voice intelligence that runs everywhere—phones, wearables, even microcontrollers—no costly GPU required.
Our Two Pillars
On-Device TTS Model
High-fidelity text-to-speech that runs directly on CPUs, mobile NPUs, and IoT processors—so you get lightning-fast, private, and energy-efficient voice everywhere.
Experience our research preview.
Smallest Reasoning Model
An ultra-compact transformer: a lightweight reasoning engine that runs effortlessly on CPUs and microcontrollers, delivering fast logic, context understanding, and decision-making without bulky hardware.
Join 0N Labs
We’re not doing a big hiring push right now, but if you’re exceptionally sharp and passionate about on-device AI, send your résumé and any research papers to careers@0nlabs.ai and tell us what excites you.