October 17, 2025

Auki community update recap: Oct 17, 2025

When Robots, Glasses and Phones Share a Map

Setting the Stage: Collaborative Spatial Computing

We opened this update from the Hong Kong lab with a simple idea: “A phone like this or a pair of glasses like these are actually robots with no arms and legs.”

That’s the core of what we do: collaborative spatial computing. We teach phones, glasses and robots to share the same understanding of physical space.

  • The real world web (Auki network) lets devices browse physical locations the way browsers visit websites.
  • Devices don’t need a hard-coded local map. They connect to a domain, download the spatial context, and can immediately navigate and coordinate.
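
As a minimal sketch of that idea, assuming a hypothetical HTTP API (the endpoint path and field names below are invented for illustration, not Auki's actual SDK):

    import requests

    DOMAIN_SERVER = "https://domain.example.com"  # hypothetical domain server

    def load_spatial_context(domain_id: str) -> dict:
        """Fetch a domain's map and points of interest on demand, the way
        a browser fetches a page, with no hard-coded local map."""
        resp = requests.get(f"{DOMAIN_SERVER}/domains/{domain_id}/context")
        resp.raise_for_status()
        return resp.json()  # e.g. {"map": ..., "points_of_interest": [...]}

    context = load_spatial_context("hong-kong-lab")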

If you’ve seen our earlier demo of Terri navigating a conference he’d never seen before by using a map built from phone video, that’s the same idea in action.

Recap: Terri, Gotu and App Clips

To ground it, we started with a handheld demo:

  1. Scan a QR code on the floor with an iPhone.
  2. iOS opens the App Clip for our Gotu app (no install needed).
  3. Scan again and the phone loads:
    • The local domain’s map
    • Points of interest (like “Nils’s desk”)
    • AR navigation as dotted lines on the floor, plus anchored AR content (e.g. in-store ads).
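
As a rough, self-contained sketch of step 3, with made-up data standing in for the real domain context (find_poi and the field names are illustrative):

    # Stand-in for the context downloaded from the domain (made-up data):
    context = {"points_of_interest": [
        {"name": "Nils's desk", "pose": {"x": 7.2, "y": 1.0}},
    ]}

    def find_poi(context: dict, name: str) -> dict:
        """Look up a named point of interest in the shared map."""
        for poi in context["points_of_interest"]:
            if poi["name"] == name:
                return poi
        raise KeyError(f"No point of interest named {name!r}")

    target = find_poi(context, "Nils's desk")
    # The AR layer then draws a dotted path from the phone's current pose
    # to target["pose"], both expressed in the domain's coordinate system.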

The phone is talking to our network, pulling down a shared map of the lab and navigating within it.

We’ve shown similar things with Terri before:

  • Terri connects to the same domain, downloads the map, and can be told “go to this location” without ever having seen the space before.
  • That’s our earlier large-scale autonomous humanoid navigation milestone.

This time, we wanted to go beyond “phone and one robot share a map” to many devices orchestrating together.

New Milestone: App- and Device-Agnostic Spatial Orchestration

The main goal of this session was to show app-agnostic, device-agnostic spatial orchestration.

In practice, that meant getting:

  • Mentra Live smart glasses
  • Terri (the Unitree humanoid)
  • An X3 warehouse-style robot

…all to act in the same shared space, triggered from different apps, made by different vendors.

We had a few live-demo gremlins (and beer), but we got three key behaviours working:

1. Glasses call a robot over

In a video recorded through the Mentra glasses, you can hear Arshak say: “Hey Terri, come here.”

What’s happening under the hood:

  • The glasses stream their view into our network.
  • Our reconstruction + localization stack pins the wearer’s exact pose down to the centimeter.
  • That pose is shared on the domain.
  • Terri subscribes to that and receives a “go to glasses location” command.
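
A toy, in-process version of that flow: the real pose travels over the Auki network, but a dict of callbacks is enough to show the handoff (all names here are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float
        yaw: float  # heading in radians, in the domain's shared frame

    subscribers = {"glasses/pose": []}

    def publish(topic, pose):
        for callback in subscribers.get(topic, []):
            callback(pose)

    def on_glasses_pose(pose):
        # Terri's side: treat the latest glasses pose as a navigation goal.
        print(f"Terri: navigating to ({pose.x:.2f}, {pose.y:.2f})")

    subscribers["glasses/pose"].append(on_glasses_pose)

    # The localization stack pins the wearer's pose and shares it:
    publish("glasses/pose", Pose(x=3.1, y=-0.4, z=1.6, yaw=1.57))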

Terri then walks across the lab to the person wearing the glasses. It doesn’t matter that the glasses and robot are built by different manufacturers, running different software – they meet in the same shared coordinate system.
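
That shared coordinate system is doing the heavy lifting: each device tracks itself in its own local frame, and localization gives it a rigid transform into the domain frame. A minimal 2D version (the numbers are made up):

    import math

    def local_to_domain(x, y, theta, tx, ty, rot):
        """Apply a device's domain transform (translation tx, ty and
        rotation rot) to a pose expressed in the device's local frame."""
        dx = x * math.cos(rot) - y * math.sin(rot) + tx
        dy = x * math.sin(rot) + y * math.cos(rot) + ty
        return dx, dy, theta + rot

    # If the glasses' local origin sits at (5.0, 2.0) in the domain,
    # rotated 90 degrees, a point 1 m ahead of the glasses maps to:
    print(local_to_domain(1.0, 0.0, 0.0, tx=5.0, ty=2.0, rot=math.pi / 2))
    # -> approximately (5.0, 3.0, 1.57)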

2. The other robot does it too

We repeated the same behaviour with the Padbot X3:

  • Glasses say “come here.”
  • X3 gets the glasses' pose via the domain.
  • X3 autonomously drives over to the glasses.

Again, no direct app-to-app integration. The only shared “language” is:

  • Pose + map on the real world web
  • A small network message: “go to this pose”
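
The whole cross-vendor contract can be that small. A hypothetical wire format (field names are illustrative, not Auki's actual schema):

    import json

    go_to_pose = {
        "type": "go_to_pose",
        "frame": "domain:hong-kong-lab",  # a pose only means something in a named frame
        "target": {"x": 3.1, "y": -0.4, "z": 0.0, "yaw": 1.57},
        "sender": "mentra-glasses-01",
    }
    payload = json.dumps(go_to_pose)
    # Any robot that understands this message and the domain's map can
    # act on it, regardless of vendor or onboard software stack.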

3. Robots navigate to each other

Then we flipped things around:

  • We sent the X3 to a known point of interest (“merch store”) in the domain.
  • From Terri's web interface, we hit “Go to robot.”
  • Terri used the shared domain data to navigate to the X3’s current location, not just a static waypoint.

He did it with a weird sideways crab walk (that’s a Unitree / ROS Nav2 issue, not our logic), but he got there.
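
The difference from a static waypoint is that the goal is re-read from the domain, so it tracks the target robot if it moves. A self-contained sketch with a stubbed domain (get_pose is a hypothetical call, and the motion is faked):

    import math

    class DomainStub:
        """Stand-in for live domain data in this sketch."""
        def get_pose(self, robot_id):
            return (4.0, 1.5)  # the X3's current position (made up)

    def navigate_to_robot(domain, robot_id, here, tolerance=0.5):
        """Re-read the target's pose each cycle so the goal follows the
        robot's current location rather than a fixed point."""
        while math.dist(here, domain.get_pose(robot_id)) > tolerance:
            tx, ty = domain.get_pose(robot_id)
            # In the demo this goal is handed to the robot's own planner
            # (ROS Nav2 in Terri's case); here we just step toward it.
            here = (here[0] + 0.5 * (tx - here[0]),
                    here[1] + 0.5 * (ty - here[1]))
        return here

    navigate_to_robot(DomainStub(), "padbot-x3", here=(0.0, 0.0))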

So in one session, we showed:

  • Glasses → robot (Terri → glasses)
  • Glasses → robot (X3 → glasses)
  • Robot → robot (Terri → X3)

All without bespoke wiring between vendors or apps. That’s the point: orchestration lives in the network, not in one monolithic stack.

Under the Hood: Domains as “Data Brokers”

We also talked a bit about how we’re evolving our design for shared spatial data.

In the early days we had a service (Odal) where everyone had to upload assets to a common CDN to be interoperable. Now we’re moving toward:

  • Domain servers as data brokers
    • Each app (Gotu, McKenna, etc.) can store its own assets on its own infra.
    • The domain holds pointers and metadata: “these assets live over there, in that app, in this coordinate system.”
    • Other apps can query the domain and discover those assets without copying them into a central store.

As Nils put it on the call: “You don’t actually have to upload your assets to Auki to make them visible across applications. You just have to register with a specific domain where to go get it.”
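
A toy version of that registration model: the domain stores pointers and poses, never the asset bytes themselves (field names are illustrative, not the actual domain server schema):

    domain_registry = {}

    def register_asset(asset_id, owner_app, url, pose):
        """Record where an asset lives and where it sits in the domain's
        coordinate system; the bytes stay on the owner's own infra."""
        domain_registry[asset_id] = {"owner": owner_app, "url": url, "pose": pose}

    register_asset(
        "store-ad-01",
        owner_app="gotu",
        url="https://assets.gotu.example/ads/store-ad-01.glb",  # the app's own CDN
        pose={"x": 2.0, "y": 0.0, "z": 1.2, "yaw": 0.0},
    )

    # Any other app can discover the asset and fetch it at the source:
    for asset_id, entry in domain_registry.items():
        print(asset_id, "->", entry["url"], "at", entry["pose"])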

This keeps things:

  • Interoperable – multiple apps can render the same objects and anchors.
  • Decentralizable – data can live where its owner wants it.
  • Aligned with our DePIN model – venues and developers stay in control of their data.

Where This Goes Next

We also shared some near-term context:

  • A trip to IROS in Hangzhou, followed by meetings in Shanghai with a major Chinese robotics partner. They’ve already talked about tailoring their next-gen robot to be “Auki-ready out of the box” and offering us partial exclusivity in certain geographies and industries.
  • Ongoing work with Mentra on the glasses side, and with builders like Mika (OneShot copilot, pizza demo) on top of our network.
  • A steady stream of robotics and AR teams who want to integrate with the real world web to improve deployment speed and economics.

Nils ended the call with the simple take-home: “You honestly saw a bit of robot history today… app agnostic, device agnostic, spatial orchestration. We did it. It happened here.”

There’s still a lot of polish and robustness work ahead, but the core behaviour is real: phones, glasses and robots can now coordinate in physical space through our network, without sharing a vendor or a codebase.

Watch the whole update on X.

About Auki Labs

Auki is building the posemesh, a decentralized machine perception network for the 100 billion people, devices, and AI on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.

Our mission is to improve people's intercognitive capacity: our ability to think, experience, and solve problems together with one another and with AI. The best way to extend human capability is to collaborate with others. We build technology that expands our minds, reduces friction in communication, and bridges minds.

About the posemesh

The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.

The posemesh is designed for a future in which spatial computing is collaborative and privacy-preserving. It limits the surveillance capabilities of any single organization and encourages sovereign ownership of private maps of spaces.

Decentralization also offers a competitive advantage, particularly in collaborative AR sessions where low latency is crucial. The posemesh is the next step in the decentralization movement, a counterweight to the growing power of tech giants.

Auki Labs has been entrusted with the development of the posemesh's software infrastructure.

Keep up with Auki

Get the latest news, photos, events, and business updates.
