
Nils is back in Hong Kong after the ASEAN AI Summit in Kuala Lumpur, where Terri, our Unitree G1 humanoid intern, went on stage with the Prime Minister of Malaysia.
Security had a radio-frequency jammer running to block hostile drones. Unfortunately, it also blocked our control link to Terri. The result: he just wouldn’t let go of the phone he was supposed to hand over.
The clip made Malaysian national TV, everyone laughed, and we’ve already been invited back for another event. It also raised a useful point: for robots to really work in the world, they need:
That’s exactly what we’re building.
For anyone new joining from Malaysia, Base, or elsewhere: Auki’s mission is to make the physical world accessible to AI.
We work on collaborative machine perception, mapping, and positioning, so devices like robots, glasses and phones can:
We don’t build foundation models. Instead, we’re a bridge to the physical world for every model. You can use Auki + OpenAI, Auki + Claude, etc. We give them spatial context and embodiment.
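To make the “bridge” idea concrete, here’s a rough sketch of what pairing spatial context with an off-the-shelf model could look like. The spatial-context endpoint, response shape, and field names below are hypothetical placeholders for illustration, not the actual Auki API; the model call simply uses OpenAI’s standard chat completions endpoint.

```typescript
// Hypothetical sketch: enrich an LLM request with spatial context.
// SPATIAL_CONTEXT_URL and the response shape are made-up placeholders,
// not the real Auki API.

interface SpatialContext {
  domainId: string;          // which physical location the device is in
  pose: { x: number; y: number; z: number; yaw: number }; // device pose in that space
  nearbyAssets: string[];    // labels of objects/fixtures near the device
}

const SPATIAL_CONTEXT_URL = "https://example.invalid/spatial-context"; // placeholder

async function getSpatialContext(deviceId: string): Promise<SpatialContext> {
  const res = await fetch(`${SPATIAL_CONTEXT_URL}?device=${deviceId}`);
  if (!res.ok) throw new Error(`spatial context request failed: ${res.status}`);
  return res.json() as Promise<SpatialContext>;
}

async function askWithSpatialContext(deviceId: string, question: string): Promise<string> {
  const ctx = await getSpatialContext(deviceId);

  // Any foundation model works here; this example uses OpenAI's chat completions API.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            `You are assisting a device in location ${ctx.domainId}. ` +
            `Device pose: ${JSON.stringify(ctx.pose)}. ` +
            `Nearby: ${ctx.nearbyAssets.join(", ")}.`,
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```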
A big focus this week: Mentra’s programmable smart glasses and how they plug into the Auki network.
Glasses provide live visual context, but devs need somewhere to send and process that video. We’re building that pipeline:
“You don’t want to do the compute on the camera. You want the compute to happen somewhere else, and Auki is providing turnkey infrastructure for that.”
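As a rough illustration of that split, the sketch below only captures frames on the glasses and ships them to a remote vision worker; the heavy model runs off-device. The frame source and worker endpoint are hypothetical stand-ins, not the real Mentra or Auki SDK.

```typescript
// Hypothetical sketch of "compute happens somewhere else":
// the glasses only capture and ship frames; a remote worker runs the heavy model.
// getCameraFrame and WORKER_URL are illustrative placeholders, not a real SDK.

const WORKER_URL = "wss://example.invalid/vision-worker"; // placeholder endpoint

// Placeholder for whatever frame callback the glasses SDK actually exposes.
async function getCameraFrame(): Promise<Uint8Array> {
  return new Uint8Array([/* JPEG bytes would go here */]);
}

async function streamFrames(intervalMs = 200): Promise<void> {
  const socket = new WebSocket(WORKER_URL);

  await new Promise<void>((resolve, reject) => {
    socket.onopen = () => resolve();
    socket.onerror = () => reject(new Error("failed to reach vision worker"));
  });

  // Results (detections, captions, etc.) come back asynchronously.
  socket.onmessage = (event) => {
    const result = JSON.parse(event.data as string);
    console.log("remote inference result:", result);
  };

  // Push a frame every few hundred milliseconds; the remote worker does the rest.
  setInterval(async () => {
    const frame = await getCameraFrame();
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(frame); // binary JPEG payload
    }
  }, intervalMs);
}
```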
Our team is working with Mentra both in Hong Kong and in San Francisco to make this pipeline open source and reusable. Expect:
Most robotics companies chase locomotion and manipulation first (walking and grabbing). We’re doubling down on another strategy: “You can ship half of a robot in a pair of glasses and let the human be the arms and legs.”
We focus on:
Glasses + the Auki network = AI copilots for physical work, right now, without waiting for humanoids to fully mature.
Our own retail copilot, Cactus, is one example. More copilots from community builders are coming soon.
A good question from the community: if we’re decentralized and respect data autonomy, how do we compete with giants like OpenAI?
Key points:
For non-sensitive workloads (like consumer 3D memory capture), reconstruction jobs can be sent to community GPUs. For sensitive commercial work, customers can limit processing to trusted machines or partial views (e.g., one shelf, not a whole store).
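The routing rule reads roughly like the sketch below: a per-job privacy setting decides whether a reconstruction job can fan out to any community GPU or must stay on machines the customer explicitly trusts. The type names and fields are illustrative assumptions, not the actual node protocol.

```typescript
// Hypothetical sketch of privacy-aware job routing.
// All type names and fields are illustrative, not the actual node protocol.

type Sensitivity = "public" | "sensitive";

interface ReconstructionJob {
  id: string;
  captureUri: string;           // where the input images/scans live
  sensitivity: Sensitivity;
  scope?: string;               // e.g. "shelf-12" rather than the whole store
  trustedNodeIds?: string[];    // customer-approved machines for sensitive work
}

interface GpuNode {
  id: string;
  community: boolean;           // true = open community GPU, false = private/trusted
}

function eligibleNodes(job: ReconstructionJob, nodes: GpuNode[]): GpuNode[] {
  if (job.sensitivity === "public") {
    // Consumer 3D memory captures and similar work can go to any available GPU.
    return nodes;
  }
  // Sensitive commercial work only runs on explicitly trusted machines.
  const trusted = new Set(job.trustedNodeIds ?? []);
  return nodes.filter((n) => !n.community && trusted.has(n.id));
}

// Example: a single shelf scan, restricted to two approved nodes.
const job: ReconstructionJob = {
  id: "job-001",
  captureUri: "s3://captures/shelf-12",
  sensitivity: "sensitive",
  scope: "shelf-12",
  trustedNodeIds: ["node-a", "node-b"],
};

const nodes: GpuNode[] = [
  { id: "node-a", community: false },
  { id: "community-7", community: true },
];

console.log(eligibleNodes(job, nodes).map((n) => n.id)); // ["node-a"]
```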
We’re preparing to launch reconstruction nodes and already have an early partner: a consumer app for capturing 3D memories (“Instagram for spatial memories”).
Each job:
We plan to:
Short, self-contained jobs make this a great first fit for community GPUs.
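For GPU operators, that “short, self-contained job” shape could look something like the worker loop sketched below: claim a job, reconstruct one capture, upload one asset, repeat. Every endpoint and function name here is a hypothetical placeholder, not the real reconstruction node software.

```typescript
// Hypothetical sketch of a reconstruction worker's lifecycle.
// The coordinator URL and job shape are placeholders, not the real node protocol.

const COORDINATOR_URL = "https://example.invalid/jobs"; // placeholder

interface JobSpec {
  id: string;
  captureUri: string;       // input images for one short capture session
  resultUploadUrl: string;  // where the finished 3D asset should go
}

async function claimNextJob(): Promise<JobSpec | null> {
  const res = await fetch(`${COORDINATOR_URL}/claim`, { method: "POST" });
  return res.status === 204 ? null : res.json();
}

// Placeholder for the actual reconstruction step (e.g. photogrammetry or splatting).
async function reconstruct(captureUri: string): Promise<Uint8Array> {
  console.log(`reconstructing ${captureUri}...`);
  return new Uint8Array(); // the finished 3D asset would be returned here
}

async function workerLoop(): Promise<void> {
  while (true) {
    const job = await claimNextJob();
    if (!job) {
      await new Promise((r) => setTimeout(r, 5_000)); // idle, poll again shortly
      continue;
    }
    const asset = await reconstruct(job.captureUri);
    // Each job is short and self-contained: one capture in, one asset out.
    await fetch(job.resultUploadUrl, { method: "PUT", body: asset });
  }
}

workerLoop().catch(console.error);
```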
Travel change: instead of Europe, Nils is heading to the US.
Our main office remains in Hong Kong; SF becomes our US hub.
Commercially:
We also have an upcoming meeting with a C-level exec at one of the world’s biggest hotel chains to explore how physical AI and copilots can help them beyond the retail context.
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a roughly 3X increase in the TAM of AI in general: AI today addresses the ~30% of the economy that is already digital, and opening up the remaining 70% roughly triples the addressable market. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com