
We opened this update from the Hong Kong lab with a simple idea: “A phone like this or a pair of glasses like these are actually robots with no arms and legs.”
That’s the core of what we do: collaborative spatial computing. We teach phones, glasses and robots to share the same understanding of physical space.
If you’ve seen our earlier demo of Terri navigating a conference he’d never seen before by using a map built from phone video, that’s the same idea in action.
To ground it, we started with a handheld demo:
The phone is talking to our network, pulling down a shared map of the lab and navigating within it.
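If you're curious what that looks like under the hood, here's a minimal sketch of the coordinate handoff involved, with made-up names and numbers rather than our actual SDK: the device pulls anchor poses for the space, observes one anchor with its camera, and chains transforms so that targets defined in the shared map can be expressed in its own local frame.

```python
# Minimal sketch (not Auki's actual API): a handheld device fetches a shared map
# of anchors for a space, localizes against one observed anchor, and expresses a
# target from the shared map frame in its own local frame. Names and values are
# illustrative; 2D homogeneous transforms are used to keep the example short.
import numpy as np

def pose(rotation_deg: float, translation_xy) -> np.ndarray:
    """Build a 2D rigid transform as a 3x3 homogeneous matrix."""
    theta = np.radians(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    tx, ty = translation_xy
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

# 1. "Download" the shared map: anchor poses expressed in the map frame.
shared_map = {
    "anchor/door":  pose(0,  (0.0, 0.0)),
    "anchor/bench": pose(90, (4.0, 2.0)),
}

# 2. The phone observes one anchor and estimates its pose in the phone's frame.
observed_anchor_in_phone = pose(0, (1.5, 0.2))   # from the camera
anchor_in_map = shared_map["anchor/door"]

# 3. Chain transforms: map -> phone, so shared-map content can be placed locally.
map_in_phone = observed_anchor_in_phone @ np.linalg.inv(anchor_in_map)

# 4. A navigation target defined in the shared map frame...
target_in_map = np.array([3.0, 1.0, 1.0])        # homogeneous point
# ...becomes a point the phone can navigate to in its own coordinates.
target_in_phone = map_in_phone @ target_in_map
print("target in phone frame:", target_in_phone[:2])
```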
We’ve shown similar things with Terri before:
This time, we wanted to go beyond “phone and one robot share a map” to many devices orchestrating together.
The main goal of this session was to show app-agnostic, device-agnostic spatial orchestration.
In practice, that meant getting a phone, a pair of Mentra smart glasses, Terri (a Unitree quadruped) and the Padbot X3 all to act in the same shared space, triggered from different apps, made by different vendors.
We had a few live-demo gremlins (and beer), but we got three key behaviours working:
In a video recorded through the Mentra glasses, you can hear Arshak say: “Hey Terri, come here.”
What’s happening under the hood:
Terri then walks across the lab to the person wearing the glasses. It doesn’t matter that the glasses and robot are built by different manufacturers, running different software – they meet in the same shared coordinate system.
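As a rough sketch of that flow (invented names, not our production interfaces): each device publishes its pose in the shared map frame, and "come here" is simply a navigation goal placed at the requester's coordinates, which any robot that understands the shared frame can act on.

```python
# Hedged sketch of the orchestration idea, not our production code: every device
# publishes its pose in the shared map frame, and a "come here" intent is just a
# navigation goal at the requester's coordinates. All identifiers are hypothetical.
from dataclasses import dataclass
from typing import Callable, DefaultDict
from collections import defaultdict

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians, in the shared map frame

class SpatialBus:
    """Tiny in-memory stand-in for the network that carries poses and goals."""
    def __init__(self):
        self.poses: dict[str, Pose] = {}
        self.subscribers: DefaultDict[str, list[Callable]] = defaultdict(list)

    def publish_pose(self, device_id: str, p: Pose):
        self.poses[device_id] = p

    def subscribe_goals(self, device_id: str, handler: Callable[[Pose], None]):
        self.subscribers[device_id].append(handler)

    def send_goal(self, device_id: str, goal: Pose):
        for handler in self.subscribers[device_id]:
            handler(goal)

bus = SpatialBus()

# The robot (any vendor) only needs to accept goals in the shared frame.
bus.subscribe_goals("terri", lambda g: print(f"Terri navigating to ({g.x}, {g.y})"))

# The glasses localize themselves and publish their pose.
bus.publish_pose("mentra-glasses", Pose(x=5.2, y=1.8, heading=0.0))

# "Hey Terri, come here" -> send the glasses' current pose as Terri's goal.
def come_here(bus: SpatialBus, robot_id: str, requester_id: str):
    bus.send_goal(robot_id, bus.poses[requester_id])

come_here(bus, "terri", "mentra-glasses")
```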
We repeated the same behaviour with the Padbot X3:
Again, no direct app-to-app integration. The only shared “language” is:
Then we flipped things around:
He did it with a weird sideways crab walk (that’s a Unitree / ROS Nav2 issue, not our logic), but he got there.
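Since Terri's navigation runs on ROS 2 Nav2, the robot-side end of "go over there" ultimately looks like a standard NavigateToPose action goal in the shared map frame. The sketch below assumes a stock Nav2 setup and uses made-up coordinates; it's illustrative, not our integration code.

```python
# Illustrative only: with a standard ROS 2 Nav2 stack, sending the robot somewhere
# boils down to a NavigateToPose action goal in the map frame. Coordinates and the
# node name are made up; the action name is the Nav2 default.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose

class GoTo(Node):
    def __init__(self):
        super().__init__("send_robot_somewhere")
        self.client = ActionClient(self, NavigateToPose, "navigate_to_pose")

    def send(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = "map"       # the shared frame
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0      # identity orientation
        self.client.wait_for_server()
        return self.client.send_goal_async(goal)

def main():
    rclpy.init()
    node = GoTo()
    future = node.send(3.0, 1.0)                # a point in the shared map
    rclpy.spin_until_future_complete(node, future)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```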
So in one session, we showed:
All without bespoke wiring between vendors or apps. That’s the point: orchestration lives in the network, not in one monolithic stack.
We also talked a bit about how we’re evolving our design for shared spatial data.
In the early days we had a service (Odal) where everyone had to upload assets to a common CDN to be interoperable. Now we’re moving toward a model where assets stay wherever their owners host them and are simply registered so other applications know where to fetch them.
As Nils put it on the call: “You don’t actually have to upload your assets to Auki to make them visible across applications. You just have to register with a specific domain where to go get it.”
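To show the shape of that idea (with invented names, not the real registry API): an application registers only a pointer to an asset it hosts on its own domain, and any other application resolves that pointer and fetches the asset directly from the owner.

```python
# A minimal sketch of the registration idea, with hypothetical names: instead of
# uploading the asset itself, an app registers *where* the asset lives, and other
# applications resolve that record and fetch it from the owner's domain.
from dataclasses import dataclass
import urllib.request

@dataclass
class AssetRecord:
    asset_id: str
    content_type: str
    url: str          # served from the owner's own domain, not a shared CDN

# In-memory stand-in for the registry the network would provide.
registry: dict[str, AssetRecord] = {}

def register_asset(record: AssetRecord):
    """The owning app publishes only a pointer to its asset."""
    registry[record.asset_id] = record

def resolve_and_fetch(asset_id: str) -> bytes:
    """Any other app looks up the pointer and fetches from the owner's domain."""
    record = registry[asset_id]
    with urllib.request.urlopen(record.url) as response:
        return response.read()

# Example: a vendor registers a mesh it hosts itself (hypothetical URL).
register_asset(AssetRecord(
    asset_id="lab/terri-avatar",
    content_type="model/gltf-binary",
    url="https://example.com/assets/terri.glb",
))
```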
This keeps things:
We also shared some near-term context:
Nils ended the call with the simple take-home: “You honestly saw a bit of robot history today… app agnostic, device agnostic, spatial orchestration. We did it. It happened here.”
There’s still a lot of polish and robustness work ahead, but the core behaviour is real: phones, glasses and robots can now coordinate in physical space through our network, without sharing a vendor or a codebase.
Auki is building the posemesh, a decentralized machine perception network for the next 100 billion people, devices and AIs on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.
Our mission is to improve people’s intercognitive capacity: our ability to think, experience and solve problems together with one another and with AI. The best way to expand human capability is to collaborate with others. We build technology that extends consciousness, reduces the friction of communication, and bridges minds.
The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.
The posemesh is designed so that spatial computing brings about a collaborative, privacy-preserving future. It limits the surveillance capabilities of any single organization and encourages self-sovereign ownership of private maps of space.
Decentralization also carries a competitive advantage, particularly for shared AR sessions where low latency is critical. The posemesh is the next step in the decentralization movement, a counterweight to the growing power of the tech giants.
Auki Labs has been entrusted by the posemesh with the development of its software infrastructure.