
We opened this update from the Hong Kong lab with a simple idea: “A phone like this or a pair of glasses like these are actually robots with no arms and legs.”
That’s the core of what we do: collaborative spatial computing. We teach phones, glasses and robots to share the same understanding of physical space.
If you’ve seen our earlier demo of Terri navigating a conference venue he’d never seen before using a map built from phone video, that’s the same idea in action.
To ground it, we started with a handheld demo:
The phone is talking to our network, pulling down a shared map of the lab and navigating within it.
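For the curious, the conceptual shape of that handheld loop is small. Here’s a minimal sketch, assuming a hypothetical client API (`SpatialClient`, `download_map` and `localize` are illustrative stand-ins, not our actual SDK):

```python
# Conceptual sketch only: SpatialClient and its methods are hypothetical
# stand-ins, not a real Auki API. The point is the order of operations.
class SpatialClient:
    def __init__(self, domain_id: str):
        # the shared spatial domain this device joins, e.g. the lab's
        self.domain_id = domain_id

    def download_map(self) -> dict:
        """Pull the shared map every device in the domain navigates against."""
        return {"domain": self.domain_id, "anchors": []}   # dummy payload

    def localize(self) -> tuple[float, float, float]:
        """Resolve the phone's (x, y, heading) inside that shared map."""
        return (0.0, 0.0, 0.0)                             # dummy pose


client = SpatialClient(domain_id="hk-lab")   # illustrative domain name
shared_map = client.download_map()  # 1. same map the robots and glasses use
x, y, heading = client.localize()   # 2. where this phone is within it
# 3. from here, waypoints, AR content and robot goals can all be expressed
#    in the one coordinate system everyone shares.
```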
We’ve shown similar things with Terri before.
This time, we wanted to go beyond “phone and one robot share a map” to many devices orchestrating together.
The main goal of this session was to show app-agnostic, device-agnostic spatial orchestration.
In practice, that meant getting a phone, a pair of Mentra glasses, Terri (a Unitree robot) and a Padbot X3 all to act in the same shared space, triggered from different apps, made by different vendors.
We had a few live-demo gremlins (and beer), but we got three key behaviours working:
In a video recorded through the Mentra glasses, you can hear Arshak say: “Hey Terri, come here.”
What’s happening under the hood is simple: the glasses know where their wearer is in the shared map, and that position becomes the robot’s navigation goal.
Terri then walks across the lab to the person wearing the glasses. It doesn’t matter that the glasses and robot are built by different manufacturers, running different software – they meet in the same shared coordinate system.
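Terri’s navigation runs on ROS 2’s Nav2 stack (the same one we blame for the crab walk below), so the last hop is just an ordinary navigation goal. Here’s a hedged sketch of that hop, assuming the wearer’s shared-map position has already been transformed into the robot’s local map frame; that transform and this helper are assumptions, and only the Nav2 Simple Commander calls are the real API:

```python
# Hedged sketch, not our production code: only the Nav2 Simple Commander calls
# are the real ROS 2 API. The goal values, and the assumption that the wearer's
# shared-map pose has already been transformed into the robot's local "map"
# frame, are illustrative.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def come_here(x: float, y: float) -> None:
    """Send the robot to (x, y), expressed in its local Nav2 map frame."""
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()

    goal = PoseStamped()
    goal.header.frame_id = "map"
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = x
    goal.pose.position.y = y
    goal.pose.orientation.w = 1.0   # heading doesn't matter for "come here"

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        pass                        # could inspect navigator.getFeedback() here

    print(navigator.getResult())    # SUCCEEDED, CANCELED or FAILED
    rclpy.shutdown()
```

Everything interesting happens before this point: agreeing on where “here” actually is.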
We repeated the same behaviour with the Padbot X3. Again, there’s no direct app-to-app integration; the only shared “language” is a position in the same shared map, delivered through our network.
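If you want to picture that shared “language”, think of something no fancier than the sketch below: a pose tagged with the spatial domain it lives in. The field names and values are illustrative, not a published schema.

```python
# Illustrative only: field names and values are made up, not a published
# schema. The substance is that apps only have to agree on "which shared map"
# and "where in it", nothing vendor-specific.
from dataclasses import dataclass


@dataclass(frozen=True)
class SharedPose:
    domain_id: str   # which shared map / coordinate system the pose lives in
    x: float         # metres in that domain's frame
    y: float
    z: float
    heading: float   # radians in that domain's frame


# The glasses publish where their wearer is...
wearer = SharedPose(domain_id="hk-lab", x=3.2, y=-1.4, z=0.0, heading=1.57)

# ...and any robot that has joined the same domain can treat that pose as a
# goal without knowing anything about the app that produced it.
goal = wearer
```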
Then we flipped things around:
Terri did it with a weird sideways crab walk (that’s a Unitree / ROS Nav2 issue, not our logic), but he got there.
So in one session we showed glasses summoning Terri, the Padbot X3 answering the same call, and the flipped version of the flow – all without bespoke wiring between vendors or apps. That’s the point: orchestration lives in the network, not in one monolithic stack.
We also talked a bit about how we’re evolving our design for shared spatial data.
In the early days we had a service (Odal) where everyone had to upload assets to a common CDN to be interoperable. Now we’re moving toward registering, per spatial domain, where each asset can be fetched from, rather than copying everything into our CDN.
As Nils put it on the call: “You don’t actually have to upload your assets to Auki to make them visible across applications. You just have to register with a specific domain where to go get it.”
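In code terms, the shift looks roughly like the sketch below: instead of pushing asset bytes to a common CDN, you register a reference with the domain. The endpoint, payload fields and names are assumptions for illustration, not the actual Odal or Auki API.

```python
# Illustrative only: the endpoint, payload fields and names here are
# assumptions, not the real Odal/Auki registration API. The substance is that
# only a *reference* is registered; the asset bytes stay wherever the
# publisher already hosts them.
import json
import urllib.request


def register_asset(domain_id: str, asset_id: str, asset_url: str) -> None:
    """Tell a spatial domain where an asset can be fetched from."""
    payload = {
        "asset_id": asset_id,
        "url": asset_url,                      # stays on the publisher's own host
        "media_type": "model/gltf-binary",     # e.g. a glTF model
    }
    request = urllib.request.Request(
        url=f"https://registry.example.invalid/domains/{domain_id}/assets",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()                        # success is all we need here


# Any app that joins the same domain can now resolve "robot-avatar" to the
# publisher's URL instead of pulling a copy from a common CDN:
# register_asset("hk-lab", "robot-avatar", "https://assets.example.invalid/terri.glb")
```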
This approach keeps assets under their owners’ control while still making them visible across applications.
We also shared some near-term context.
Nils ended the call with the simple take-home: “You honestly saw a bit of robot history today… app agnostic, device agnostic, spatial orchestration. We did it. It happened here.”
There’s still a lot of polish and robustness work ahead, but the core behaviour is real: phones, glasses and robots can now coordinate in physical space through our network, without sharing a vendor or a codebase.
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a 3X increase in the TAM of AI in general. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com