November 21, 2025

Auki community update recap: Nov 21, 2025

Why Our First 500 Robots Won’t Fold Laundry

Checking in from San Francisco

This week’s community update came from San Francisco instead of our usual Hong Kong lab. The topic was very on-brand for this city: robots, money, and how to put a lot of robots to work very quickly.

If you’re new: we’re building the real world web, our network that makes physical venues browsable, searchable, and navigable to phones, glasses, and robots. The idea is simple: let devices share what they see, so robots don’t have to figure everything out alone.

Today, most robot perception is locked inside each robot. With Auki, perception becomes collaborative.

A Robot With No Camera That Still Knows the Store

We showed a small but important demo: a brand-new robot, without a camera, that can still guide you to products in a supermarket it has never seen.

The setup flow was intentionally boring:

  1. Open the app
  2. Scan the QR code on the robot base
  3. Scan another QR code

That’s it.

Behind the scenes:

  • The phones in that store had already been using our AI copilot for retail (Cactus) to do work.
  • Those phones had built a 3D map and product layout and uploaded it to the network.
  • When we scanned the marker on the robot’s base, it connected to the existing domain and pulled down:
    • The store map
    • The product locations

We searched for popcorn. The robot, with no camera of its own, already knew where to go and drove straight there. To summarize: “The setup process was literally open the app, click away the notifications, scan a QR code, scan another QR code.”
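The onboarding flow above can be sketched roughly as follows. Everything here is illustrative: the function names, the domain ID format, and the in-memory "network" stand-in are invented for this sketch and are not Auki’s actual API.

```python
# Hypothetical sketch of the QR-based onboarding flow described above.
# Names and data shapes are invented; this is not Auki's real API.

def parse_qr(payload: str) -> str:
    # In reality this would decode a QR image; here the payload is already text.
    return payload.strip()

# Stand-in for domain data already uploaded by phones running Cactus.
FAKE_DOMAIN_DB = {
    "store-042": {
        "map": {"aisles": 12},
        "product_locations": {"popcorn": ("aisle 7", "shelf 3")},
    }
}

def fetch(domain_id: str, resource: str):
    # Stand-in for pulling shared data down from the network.
    return FAKE_DOMAIN_DB[domain_id][resource]

def onboard_robot(robot_qr: str, domain_qr: str) -> dict:
    """Join a camera-less robot to a venue that is already mapped."""
    robot_id = parse_qr(robot_qr)    # QR on the robot's base identifies the unit
    domain_id = parse_qr(domain_qr)  # second QR identifies the store's domain
    return {
        "robot": robot_id,
        "map": fetch(domain_id, "map"),
        "products": fetch(domain_id, "product_locations"),
    }

state = onboard_robot("robot-7", "store-042")
print(state["products"]["popcorn"])  # the robot already knows where popcorn lives
```

The key design point is that the robot contributes nothing to the map at setup time; it only downloads what phones in the store have already built.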

The point isn’t the specific robot – this isn’t the final hardware we’ll deploy. The point is that any compatible robot can become “store-aware” in minutes if the venue is already on the real world web.

We already have 1,000+ locations using our phone-based Cactus. Now we can drop robots into those same stores and have them work on day one.

What a $100/Day Robot Actually Does

We’re planning to lease robots to retailers at around $100 per day. For that price, each robot will do three main jobs – and none of them require advanced manipulation.

1. Nightly shelf audits (without the glare problem)

At night, the robot will:

  • Drive the store autonomously
  • Capture camera data of the shelves
  • Use its arm only to bring a barcode scanner up to each price tag

This solves two big problems:

  1. “Where do I even start?”
    One of Europe’s largest non-grocery retailers told us their staff lose 30–60 minutes every day just figuring out where to begin. The robot’s overnight run lets us generate an AI task list for the morning shift: what’s wrong, what’s missing, where to fix it.
  2. Existing scanner towers fail more than you think
    Competing “camera tower” robots already exist. The cheaper ones are around $30/day, the more advanced ones up to $130/day, and retailers are paying those prices.
    Their weak spot is glare: “The tower robots actually have a pretty high failure rate… the price tags have glare on them.” By bringing a barcode scanner physically close to each tag, we dodge the glare issue entirely.

So just doing accurate shelf inspection is arguably worth that $100/day on its own.
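The "AI task list for the morning shift" can be thought of as a diff between what the overnight scan observed and what the planogram expects. This is a minimal sketch under assumed data shapes (barcode keys, price/stock tuples), not Auki’s implementation:

```python
# Illustrative sketch: turning an overnight shelf scan into a morning
# task list by diffing it against the expected planogram. Data shapes
# are invented for this example.

PLANOGRAM = {  # what the shelf *should* look like: barcode -> (price, min_stock)
    "1001": (2.99, 4),
    "1002": (5.49, 2),
    "1003": (1.25, 6),
}

def morning_task_list(scan: dict) -> list:
    """scan: barcode -> (scanned_price, observed_stock) from the night run."""
    tasks = []
    for code, (price, min_stock) in PLANOGRAM.items():
        if code not in scan:
            tasks.append(f"{code}: missing from shelf -- restock")
            continue
        seen_price, stock = scan[code]
        if seen_price != price:
            tasks.append(f"{code}: price tag reads {seen_price}, should be {price}")
        if stock < min_stock:
            tasks.append(f"{code}: low stock ({stock} < {min_stock})")
    return tasks

for task in morning_task_list({"1001": (2.99, 1), "1002": (4.99, 3)}):
    print(task)
```

Because the barcode scanner is read at close range, the scanned price comes from the tag itself rather than a glare-prone camera frame, which is exactly the failure mode of the tower robots.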

2. In-store customer care that actually knows where things are

The second job is customer guidance. You’ll be able to walk up to the robot in the aisle and ask it things like:

“Do you have a vegetarian alternative to this?”
“Where’s the low-fat version?”
“Do you have ketchup?”

Because the robot is integrated with the real world web (3D layout and product locations), and plugged into the store’s inventory systems, it doesn’t have to hallucinate. It knows what the store actually carries and where it lives, down to the specific spot on the shelf.
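Grounding answers in live inventory rather than model guesswork can be sketched like this (the inventory structure and `answer` function are invented for illustration):

```python
# Hedged sketch: answering customer questions only from the store's
# actual inventory and layout, so nothing is hallucinated.
# All data structures here are invented for illustration.

INVENTORY = {
    "ketchup":        {"in_stock": True,  "location": ("aisle 4", "shelf 2")},
    "low-fat yogurt": {"in_stock": False, "location": ("aisle 9", "shelf 1")},
}

def answer(query: str) -> str:
    item = INVENTORY.get(query.lower())
    if item is None:
        return f"We don't carry {query}."  # grounded: no invented aisle numbers
    aisle, shelf = item["location"]
    if not item["in_stock"]:
        return f"{query} is out of stock (normally {aisle}, {shelf})."
    return f"Yes -- follow me to {aisle}, {shelf}."

print(answer("ketchup"))
```

The point of the lookup-first design is that a language model only phrases the reply; the facts (carried or not, in stock or not, which shelf) come from the store’s systems.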

There’s real money here too. US research suggests up to 6% of shopper baskets would contain one more item if the staff were more knowledgeable about where products are.

A robot that always knows where everything is, and can physically walk you there, is a quietly powerful sales tool.

By the way: we’re aiming to have an end-to-end humanoid demo soon where you can say “Do you have ketchup?” and it will:

  • Navigate to the right aisle
  • Walk you there
  • Point at the exact spot on the shelf in 3D

All powered by Auki.

3. Remote brand reps riding along as “telespectators”

The third job surprises people outside retail: remote brand audits.

Today, big brands send field reps in cars to visit stores, walk the aisles, and check:

  • Do we have the shelf space we paid for?
  • Are our products placed correctly?
  • Is anything out of stock or mispriced?

One example Nils gave: “One of the world’s largest beverage companies has over 400 field reps just in Sweden.”

Those reps have been asking retailers for camera data so they can stay off the road. The retailers mostly say no, for understandable reasons: privacy, GDPR, and the fact that raw video stored in their backend is sensitive.

We’ve found a workaround: “It’s totally okay for the brand to be in there recording. They can take pictures in there.”

So we give brands remote access to the robot instead:

  1. Brand rep logs into the robot from home.
  2. Says: “Take me to the Coca-Cola shelf.”
  3. The robot drives over.
  4. The rep inspects the shelf remotely and grabs whatever screenshots they need.
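The session flow above can be sketched as a tiny teleoperation loop. Class and method names are hypothetical; this is not Auki’s actual remote-access API:

```python
# Minimal sketch of the remote brand-audit flow. The robot navigates
# itself using the shared store map; the rep only issues destinations
# and captures images. Names are invented for illustration.

class RemoteAuditSession:
    def __init__(self, shelf_locations: dict):
        self.shelves = shelf_locations  # shelf name -> waypoint in the store map
        self.position = "dock"
        self.screenshots = []

    def go_to(self, shelf: str) -> None:
        # Navigation uses the venue's existing map, not manual driving.
        self.position = self.shelves[shelf]

    def screenshot(self) -> str:
        shot = f"photo@{self.position}"
        self.screenshots.append(shot)
        return shot

session = RemoteAuditSession({"Coca-Cola shelf": "aisle 3, bay 2"})
session.go_to("Coca-Cola shelf")  # "Take me to the Coca-Cola shelf."
print(session.screenshot())       # rep grabs what they need remotely
```

Keeping the rep at the level of destinations and screenshots, rather than raw video feeds, is what makes this easier to square with retailers’ privacy concerns.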

Benefits:

  • Fewer cars and fewer hours on the road
  • Audits can be done at night when the store is closed
  • In theory, you can staff this globally, not just locally

This is appealing enough that two of the world’s biggest brands have already said: “Oh, if this works well, I think we’re interested in paying for the entire robot so that the store just accepts having it.”

In other words, in some cases the brand might pay to place the robot in the store.

Why We’re Doing “Mobile Cognition” Before Fancy Hands

If you look at American home robots like 1X Neo or Sunday, they all focus on manipulation in the home: folding laundry, loading the dishwasher, vacuuming. We love those robots. But manipulation is hard.

We pointed out that Sunday’s impressive demo runs at up to 10x speed. “If you do the math you see that it’s folding one pair of socks every two minutes or so. Manipulation is difficult.”

Our strategy is different:

  • We focus first on mobile cognitive labor – perception and navigation plus reasoning.
  • No delicate object handling required to deliver value.

“We’re focusing on getting the robots to do mobile cognitive labor first, because the AI is much further along for cognitive labor. It just can’t move around and perceive the world.”

The three jobs above are exactly that: seeing, understanding, deciding, and moving, not finely manipulating.

Chinese Robots, Open Stacks, and $20M of Hardware

We also talked about hardware and why we mostly work with Chinese robot makers right now:

  • They ship robots today.
  • Their robots are more open and programmable, so we can integrate them with the real world web without begging for SDK access.
  • In those relationships, we are the customer, and the retailer is our customer.

Most American robotics companies, by contrast, want to:

  • Keep the robot as a closed product
  • Own the entire customer relationship end-to-end

That makes deep integration harder.

To have real influence, we’re going to put skin in the game: “We are looking to spend around 20 million dollars on robot hardware in 2026… If we really want to impact how the robot companies think about the future and their own roadmap, we need to be their biggest customer.”

The idea is simple: if working with us makes robot OEMs more successful, they’ll opt in to our network and roadmap.

From 500 Robots to Hundreds of Thousands

Our concrete targets:

  • By end of 2026:
    • At least 500 robots in the wild
    • Doing real, profitable work, not beta tests or “just data collection”
  • In 2027:
    • Start the journey towards 10,000 robots deployed

And beyond that, Nils thinks it’s possible to deploy “somewhere between 100,000 and 500,000 of these robots, just to grocery retail… possibly before the end of 2030.”

That’s just grocery. The same model works in:

  • DIY and home improvement
  • Electronics and big-box retail
  • Pharmacies
  • Airports, conferences, train stations, and more

We’re already making millions in revenue on the phone side of the network. The robots plug into that same fabric.

Being in San Francisco and talking to labs and investors here, the reaction has been encouraging:

“You have to solve simpler problems than we’ve been trying to fix, you’re going to get paid more than we are trying to get paid, and you’re going to be able to sell more of them in a shorter time frame than we are.”

We’re not trying to be the coolest demo robot. We’re trying to be the most deployed, the most useful, and the easiest to plug in.

And for now, that starts with robots that don’t fold your laundry – they just quietly help run the real world.

Watch the whole update on X.

About Auki Labs

Auki is building the posemesh, a decentralized machine perception network for the 100 billion people, devices, and AIs on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.

Our mission is to improve our intercognitive capacity: our ability to think, experience, and solve problems together with one another and with AI. The best way to expand human capability is to collaborate with others. We build technology that expands consciousness, reduces the friction of communication, and bridges minds.

About the posemesh

The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.

The posemesh is designed to bring about a collaborative, privacy-preserving future for spatial computing. It limits the surveillance capabilities of any single organization and encourages self-sovereign ownership of private maps of spaces.

Decentralization also carries a competitive advantage, especially in shared AR sessions where low latency is critical. The posemesh is the next step in the decentralization movement, a counterweight to the growing power of the tech giants.

Auki Labs has been entrusted with developing the posemesh’s software infrastructure.

Keep up with Auki

News, photos, events, and the latest business updates.
