INAIR Is Turning ‘Computing Freedom’ Into a Basic Right for the Modern Digital Nomad, Says CEO Kurt Huang

In a world where work is increasingly mobile, digital, and privacy-conscious, INAIR is redefining how we engage with technology through the lens of augmented reality. As CEO of INAIR—the AR glasses brand under Beijing Duoping Future Technology Co., Ltd.—Kurt Huang is leading the charge to create next-generation, all-in-one AR computing solutions that are portable, efficient, and deeply intuitive.
Since its founding, INAIR has launched two generations of AR hardware that have received both critical acclaim and commercial success on platforms like JD.com and Tmall. Under Kurt’s leadership, the company is not only delivering sleek and high-performance AR devices but also pushing the boundaries of how immersive hardware can empower people to work more freely and securely.
With a founding team drawn from top internet and tech firms, and with deep expertise in AR software, hardware, and product development, INAIR is positioning itself as a frontrunner in the global race toward a more connected, productive future.
Benzinga’s Bibhu Pattnaik spoke with Huang, who shared his vision for the future of AR in the workplace, what distinguishes INAIR in the growing wearables market, and how the company is redefining the next generation of computing. Here’s an excerpt from the conversation.
What inspired the creation of INAIR, and how did your past experience in AI and system architecture shape its foundation?
We founded INAIR based on a deep insight into the evolution of technology and user needs. During my graduate studies in computer science at Tsinghua University, I focused on distributed systems and heterogeneous computing architecture. I also had stints at Broadcom and Intel, where I led the development of multiple generations of chip platforms, overseeing hardware, software, and system-level R&D. In 2016, as AR technology began to mature, I saw the huge potential for practical applications of this technology. That’s when I began focusing solely on the AR industry.
As the XR (extended reality) market later emerged, it fell into the trap of prioritizing extreme displays over interaction: devices pursued ever-more impressive display specs at the expense of comfort and wearability, or piled on AI features while neglecting seamless contextual use. This drove us to rethink the entire technology stack with a full-stack mindset. We built a proprietary distributed rendering engine and spatial-awareness algorithms for industry-leading real-time rendering. By deeply integrating cross-device heterogeneous computing (collaboration between phone, PC, and cloud) with natural interaction protocols (gaze, touch, and voice), we ultimately created the world’s first Spatial Native Operating System – INAIR OS.
The essence of INAIR is to let technology evolve from being “used” to being “forgotten.”
In a rapidly evolving AR landscape, what distinguishes INAIR’s Spatial Computer from other solutions on the market?
We are at a turning point in the AR industry, moving from flashy tech demonstrations to deeply integrated real-world scenarios. INAIR is different in three main ways: a focus on different scenarios, system re-architecture, and ecosystem elevation.
Early AR glasses often relied on enterprise (B2B) customers to survive. After 2022, AR glasses saw success in consumer entertainment and video-watching scenarios. But by 2024-2025, differentiation and user experience improvements in entertainment had reached a plateau. We believe AR’s next frontier is mobile productivity, providing a lightweight, powerful solution for work on the go, not just entertainment.
Many competing AR devices have app ecosystems that don’t work across platforms, blocking data transfer between devices. We addressed this with what we call a Spatial Native Architecture to reconstruct the tech stack, built on a modular design. The glasses themselves weigh just 77 grams, making them comfortable for all-day wear. They feature a continuously adjustable electrochromic tint (privacy mode) that users love. The compute unit (the “pod”) is about the size of a computer mouse, yet packs extremely powerful performance.
We developed our own INAIR OS, which is compatible with Android, iOS, Windows, and Mac ecosystems. Data flows seamlessly across devices. We also include a built-in AI agent that proactively predicts user needs and understands context to provide intelligent responses and assistance.
The entire user interface is in 3D. Users interact naturally using gaze, touch (via our Touchboard accessory), and voice. This depth in interaction makes the experience much more intuitive than traditional 2D screens.
What were some of the key technological or design challenges you faced during the development of INAIR, and how did your team overcome them?
Developing INAIR meant tackling three seemingly contradictory challenges: making the device extremely light but extremely powerful; making interaction very natural yet very precise; and keeping the technology cutting-edge but broadly accessible.
Demand for lightweight design is paramount as users seem to notice every gram. High-performance computing, however, naturally brings high power consumption, heat, and weight. We had to innovate across the full spectrum, from materials to chip design. For example, out of hundreds of materials, we selected a magnesium-aluminum alloy that allowed us to shave 1.5 grams of weight – a significant achievement at this scale. We also used a modular (split) design, separating the display glasses from the compute pod, enabling the glasses to stay light and comfortable.
Traditional AR devices rely on voice commands or handheld controllers, both of which are poorly suited to quick input – especially in a work environment. We realized that for productivity use cases, a keyboard is essential. Therefore, we designed a custom accessory called the INAIR Touchboard. It combines a full-size keyboard with spatial interaction capabilities. This ensures instant, continuous productivity with a natural feel.
One of INAIR’s standout features is its seamless integration with Android apps and PC systems—how important was this interoperability in your product vision?
Interoperability is the soul of our product. Our lives are already filled with smartphones, laptops, tablets and other tech devices. Nobody wants to add AR glasses that just create more hassle. The real value comes from seamless integration.
For example, imagine you’re on a business trip and get a message from your boss on your phone. With our glasses, you could instantly stream your office computer, open the needed document, and send the file – all without fumbling between devices. This kind of effortless, seamless connectivity is exactly what users need. That’s why from day one, we made it a top priority to break down barriers between the Android, iOS, Windows, and Mac ecosystems.
INAIR’s native remote streaming technology promises a new kind of productivity—can you elaborate on how this enhances the user experience?
Our remote streaming technology is essentially a “spatial decoupling” of compute power that lets you tap into the most suitable hardware from anywhere. Here’s an example: you can leave your laptop at the office or home and travel with only the INAIR glasses and the mouse-sized compute pod. Using our proprietary INAIR OS, the glasses can wirelessly connect to your laptop thousands of miles away, bringing your desktop into the AR space in front of you. At the same time, the local compute pod handles lighter tasks like video conferencing. If you pair a Bluetooth keyboard, you have a full input setup, often with a more seamless experience than if you were using the laptop alone.
What’s truly revolutionary is the way this changes ownership of computing power. In the future, you could leave your top-tier GPU at home, and write code on a beach wearing our glasses, with heavy lifting done in the cloud. INAIR is turning “computing freedom” into a basic right for the modern digital nomad.
How has user feedback influenced product iterations, and what insights have surprised or challenged your initial assumptions?
Users are increasingly hungry for immersive 3D experiences. From movies to apps, flat 2D content just doesn’t satisfy them anymore. They want to use mature AR hardware (like glasses) to consume and interact with 3D content. For example, spatial videos shot on an iPhone can now be watched on INAIR, and those old 2D photos sitting in your photo album can be converted into 3D views with a single click.
Users aren’t satisfied with just “it works.” They expect “imperceptible switching” between tasks and devices. This means we have to constantly improve latency and responsiveness.
Today’s large AI models are powerful, but they’re often not user-friendly. Many AI apps (like DeepSeek or ChatGPT) just present a simple text box, and different apps each have their own AI assistant, leading to siloed experiences. INAIR’s approach is to bake AI capability into the system layer so that you get a unified, intuitive experience no matter what software you’re using.
As spatial computing gains traction, how do you see INAIR contributing to or redefining this new computing paradigm?
Our mission is to bring the AI+AR paradigm shift to mobile productivity. We aim to create a future where work is no longer confined by screens or locations, empowering everyone to collaborate effortlessly in a seamless blend of the physical and digital worlds.
With global expansion on the horizon, what markets are you most focused on?
The North American and European markets are at the core of our global strategy. This decision is based on INAIR’s own technological layout, scenario adaptability, and long-term market insights.
Users in North America and Europe are highly receptive to cutting-edge technologies, especially the combination of “lightweight, portable computing” and “high performance.” People in these regions have strong demand for remote and flexible work setups, and they aspire to a liberated, location-independent way of working. This vision aligns closely with INAIR’s philosophy.