September 5, 2025

Auki community update recap: Sep 5, 2025

World's First Large-Scale Humanoid Navigation Demo

A World First: Humanoid Walking a “New” Space

This week we did a live demo we’re pretty proud of: large-scale autonomous humanoid navigation using the Auki network.

Terri McKenna, our Unitree G1 humanoid resources intern, started the session like this:

  • Freshly booted
  • “No memory of ever having been here” in our Hong Kong lab
  • Only his stock sensors (RealSense depth camera + LiDAR)
  • Only his onboard compute

We walked him to one of our floor markers. As soon as he saw the QR code, he:

  1. Connected to the internet
  2. Asked the Auki network: “If I see this marker, where am I?”
  3. Got routed to the edge node hosting the map of our lab
  4. Downloaded that map and started navigating autonomously
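
For the technically curious, here is a rough sketch of what that handshake looks like in spirit. Everything below (the directory, the field names, the coordinates) is invented for illustration; it is not the actual Auki network API, just the shape of the lookup.

# Hypothetical sketch of the marker lookup flow above. None of these names
# are the real Auki API; they only illustrate the idea.

MARKER_DIRECTORY = {
    # marker id -> (edge node hosting the venue map, marker's pose in that map)
    "qr:hk-lab-entrance": ("edge.hk-lab.local", {"x": 2.0, "y": 4.0, "heading_deg": 90.0}),
}

VENUE_MAPS = {
    # what each self-hosted edge node would serve: named landmarks on the floor plan
    "edge.hk-lab.local": {"nils_desk": (12.5, 3.0), "charging_dock": (0.5, 1.0)},
}

def relocalize(marker_id):
    """'If I see this marker, where am I?' -> edge node, venue map, own pose."""
    edge_node, my_pose = MARKER_DIRECTORY[marker_id]  # step 2: ask the network
    venue_map = VENUE_MAPS[edge_node]                 # steps 3-4: routed to the edge node, download its map
    return edge_node, venue_map, my_pose

edge, venue_map, pose = relocalize("qr:hk-lab-entrance")
print(f"Positioned at ({pose['x']}, {pose['y']}); Nils's desk is at {venue_map['nils_desk']}")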

From there, Phil simply told Terri: go to Nils’s desk. And he did it, autonomously.

We showed the LiDAR view on screen, the planned path, and the remote lying untouched on the table. Apart from one comic moment when a door confused his obstacle avoidance (classic robotics…), he walked the route himself.

Next Tuesday we’ll do the same thing at the WOW Summit in Hong Kong, in a venue Terri has definitely never seen before.

Why Physical AI Is the “Final Frontier”

Before the demo, we zoomed out a bit.

In January, Jensen Huang said we’re moving from generative AI to agentic AI, and that the final frontier is physical AI – systems that understand space and can act in the physical world.

The reason we care so much about that: 70% of the world’s GDP is still tied to physical locations and labor.

Going from agentic AI (purely digital) to physical AI represents at least a 3× increase in TAM for the whole AI industry: if purely digital work is roughly the remaining 30% of GDP, adding the physical 70% roughly triples the addressable market.

That’s why we exist. Auki makes the physical world accessible to AI and robots.

We do that with what we call the real world web – our network that lets digital devices browse physical locations the way browsers visit websites, making them:

  • Navigable
  • Searchable
  • Accessible to AI

Each venue can self-host its own digital representation. Robots and other devices connect to local edge nodes instead of one giant central map in someone’s cloud.

The Six Layers Robots Really Need

We also revisited the six layers of tech general-purpose robots need before they’ll be truly useful in everyday life:

  1. Locomotion – walking / moving without falling
  2. Manipulation – grabbing, lifting, moving objects
  3. Perception – spatio-semantic perception (what is it? how far is it?)
  4. Mapping – object permanence: remembering where things are
  5. Positioning – knowing where you are relative to the map
  6. Applications – tying it all together into real tasks

Today, most of the robotics hype is about 1 and 2. Even with the best demos (and some impressive vision-language-action models, or VLAs), you still can’t reliably tell a robot, “Go empty all the trash cans in this office.”

Because:

  • VLAs are mostly trained on internet data, not detailed motion trajectories
  • They don’t have object permanence
  • Robots still need a map and a positioning system that actually works indoors

As we like to put it: “A robot has about as much use of a GPS as you have of a fax machine.”

GPS doesn’t work indoors and can’t tell you where your desk or kitchen is. So we focus on the middle layers. That’s what the real world web provides: an external sense of space that robots can read from and write to.

Copilots Before Full Robots: Cactus in the Wild

Our view is that you don’t need perfect humanoids to start deploying physical AI. Modern smartphones are already capable of spatial computing, and they provide an early and viable form factor for AI copilots.

Just like white-collar workers now have AI copilots in tools like ChatGPT, we think every physical job will eventually get a copilot too.

Our first one is Cactus, the spatial AI for retail. It runs on phones today and will run on smart glasses by the end of the year.

Cactus lets stores:

  • Use a phone to build a “real-world website” (a domain / digital twin)
  • Track where products are and how they sell in space
  • Generate heat maps of shelf and aisle performance
  • Navigate staff to baskets of items, not just single products

We’ve been able to reduce the walking distance for click-and-collect staff by up to 40%.
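
Where does a saving like that come from? Mostly from knowing where every product sits and ordering the pick list accordingly. Here is a toy sketch with invented shelf coordinates and a simple nearest-neighbour ordering; the real routing in Cactus is more involved, this is only the intuition.

from math import dist

# Made-up shelf coordinates (metres) on a store floor plan.
PRODUCT_LOCATIONS = {
    "oat milk": (2.0, 18.0),
    "dish tablets": (14.0, 2.0),
    "coffee": (3.0, 16.0),
    "paper towels": (15.0, 4.0),
}

def walking_distance(start, stops):
    """Total distance to visit the stops in the given order."""
    total, here = 0.0, start
    for stop in stops:
        total += dist(here, stop)
        here = stop
    return total

def greedy_route(start, items):
    """Order a basket by repeatedly walking to the nearest remaining item."""
    remaining = {name: PRODUCT_LOCATIONS[name] for name in items}
    route, here = [], start
    while remaining:
        name = min(remaining, key=lambda n: dist(here, remaining[n]))
        here = remaining.pop(name)
        route.append(name)
    return route

basket = ["oat milk", "dish tablets", "coffee", "paper towels"]
entrance = (0.0, 0.0)
naive = walking_distance(entrance, [PRODUCT_LOCATIONS[n] for n in basket])
smart_order = greedy_route(entrance, basket)
smart = walking_distance(entrance, [PRODUCT_LOCATIONS[n] for n in smart_order])
print(f"basket order: {naive:.0f} m, routed order {smart_order}: {smart:.0f} m")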

We’re already:

  • Deploying to over a thousand locations
  • Collecting millions of dollars in pilot revenue
  • Sitting on an open pipeline of $150M+ in pilot opportunities

All this is happening now, with phones.

Glasses and Robots: Same Brain, Different Bodies

From phones, we’re moving into smart glasses and robots, all using the same underlying network.

On glasses, we’ve partnered with Mentra, who are building open, programmable camera glasses. By giving these smart glasses spatial awareness, we open up powerful new ways to interact with AI that can see what you see and provide guidance for physical work.

But the same external sense of space is already helping robots too.

After we showed Terri, we brought out a simpler wheeled robot in our fake grocery store and asked it: “Where is the Finnish Powerball Ultimate All-in-One?”

It queried the same map Terri uses via the Auki network, planned a path, drove to the correct bay and pointed to the product.

Then we did the same thing again with… an iPhone.

  • Nils scanned one of our floor QR codes
  • iOS opened a Zappar app clip (no app installed)
  • The app connected to the Auki network and loaded Gotu (a different app from Cactus)
  • He asked “Where is Nils?” and followed the AR guidance to his desk

Same shared map. Three very different “bodies”:

  • A humanoid
  • A small store robot
  • A regular smartphone

All browsing the real world web.
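
The pattern is the same in every case: one venue map hosted by the venue, and each device just brings its own pose and asks questions against it. A hypothetical sketch, with invented names and coordinates rather than the real Auki API:

from math import atan2, degrees, dist

# One shared venue map, served once by the venue's edge node (coordinates made up).
SHARED_MAP = {
    "nils_desk": (12.5, 3.0),              # a point of interest in the lab
    "dishwasher_tablets_bay": (6.0, 9.5),  # a product bay in the mock store
}

def find(target, my_position):
    """'Where is X?' answered from the shared map, relative to whoever is asking."""
    x, y = SHARED_MAP[target]
    dx, dy = x - my_position[0], y - my_position[1]
    # A real robot would plan a path around obstacles; a straight-line answer keeps the sketch short.
    return dist(my_position, (x, y)), degrees(atan2(dy, dx))

# Three very different bodies, one map.
clients = {"humanoid": (2.0, 4.0), "store robot": (5.0, 1.0), "iphone": (10.0, 2.0)}
for name, position in clients.items():
    metres, bearing = find("nils_desk", position)
    print(f"{name}: Nils's desk is {metres:.1f} m away, bearing {bearing:.0f} degrees")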

The Road Ahead

To recap what we’re aiming for:

  1. Build the real world web – the external spatial nervous system
  2. Enable the app store for the real world – multiple apps per venue, from many devs
  3. Focus on copilots first – phones and glasses before full autonomy
  4. Become the world’s largest robot distributor – because we make spaces robot-ready
  5. Build the OS for embodied AI – once we have distribution and partners
  6. Run on 100 billion devices – each one burning a little $AUKI every day as it moves through the world

This week was a big milestone on that path: a humanoid navigating a “new” venue via our network, a small robot and a phone sharing the same map, and a clear story for how we go from copilots to robots at scale.

If you want to see Terri do it live, come find us at WOW Summit in Hong Kong. And as always, if you want the off-record bits, join us in Discord.

Watch the whole update on X.

About Auki Labs

Auki is building the posemesh, a decentralized machine perception network for the 100 billion people, devices and AI on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.

Our mission is to improve civilization's intercognitive capacity: our ability to think, experience and solve problems together with each other and with AI. The best way to extend human capability is to collaborate with others. We build technology that expands awareness, reduces the friction of communication and bridges minds.

About the posemesh

The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.

The posemesh is designed for a future where spatial computing is both collaborative and privacy-preserving. It limits the surveillance capabilities of any single organization and encourages sovereign ownership of private maps of spaces.

Decentralization also has a competitive advantage, especially in collaborative AR sessions where low latency is crucial. The posemesh is the next step in the decentralization movement, a counterweight to the growing power of the tech giants.

Auki Labs has been entrusted by the posemesh with the development of its software infrastructure.
