
If you’re new: our mission is simple to say and hard to execute. We are making the physical world accessible to AI.
More than two-thirds of the world economy is still tied to physical locations and physical labor. LLMs mostly serve white-collar work on the internet. To really grow the total addressable market of AI, we need physical AI.
We see three civilization-scale opportunities opening up this decade: the real world web, the app store for the real world, and the OS for embodied AI.
Our strategy is to build these in the right order so we get a real shot at all three.
Digital systems can’t “see” the physical world by default. When a robot visits a venue, it needs a way to browse that place, the way a browser hits a website.
We don’t believe all the world’s spatial data will sit in one company’s cloud. Just as the world’s websites are not stored on a single server, it’s very unlikely that all physical data will live in a single cloud.
Instead, each venue should self-host its own digital representation, and robots/glasses/phones discover and connect to those hyperlocal maps and compute.
That’s what we started building around 2021: a network of self-hosted venue domains that robots, glasses, and phones can discover, connect to, and position themselves in.
This is our real world web, and it’s the base layer for everything else.
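To make the browser analogy concrete, here’s a minimal sketch of what “browsing” a venue could look like from a device’s point of view. Everything here (the registry, the endpoints, the types) is a hypothetical illustration, not the actual Auki API:

```python
import json
from dataclasses import dataclass
from urllib.request import urlopen

# Hypothetical sketch only: these endpoints and types illustrate the
# "browser for places" idea and are not the real Auki API.

@dataclass
class Domain:
    """A venue's self-hosted digital representation."""
    domain_id: str
    base_url: str  # where the venue serves its own map and compute


def resolve_domain(registry_url: str, venue_id: str) -> Domain:
    """Look up where a venue hosts its domain, like DNS for places."""
    with urlopen(f"{registry_url}/resolve/{venue_id}") as resp:
        record = json.load(resp)
    return Domain(record["domain_id"], record["base_url"])


def fetch_map(domain: Domain) -> dict:
    """Download the venue's hyperlocal map from the venue's own server."""
    with urlopen(f"{domain.base_url}/map") as resp:
        return json.load(resp)


# A robot entering a venue "browses" it the way a browser hits a website:
# domain = resolve_domain("https://registry.example", "store-42")
# floor_plan = fetch_map(domain)
```

The point of the shape: the registry only tells you where a venue lives; the map itself comes from the venue’s own server, not from one central cloud.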
Robots won’t jump straight to Artificial General Physical Intelligence. Even with VLAs (Vision-Language-Action models), capabilities will arrive one task at a time rather than all at once.
So instead of one omnipotent robot brain, we think we’ll see an app store for robots, where you buy the ability to fold laundry, walk the dog, or empty trash cans.
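To make the app-store idea concrete, here’s a minimal sketch of skills as installable modules behind a common interface. The names and the interface are assumptions for illustration, not an Auki or robot-vendor specification:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: skills as installable apps with a shared
# interface. Nothing here is an actual Auki API.

class Skill(ABC):
    """One purchasable capability, e.g. 'fold laundry' or 'empty trash'."""

    name: str

    @abstractmethod
    def can_run(self, robot_capabilities: set[str]) -> bool:
        """Check that the robot has the hardware this skill needs."""

    @abstractmethod
    def run(self, domain_map: dict) -> None:
        """Execute the task using the venue's map from the real world web."""


class FoldLaundry(Skill):
    name = "fold-laundry"

    def can_run(self, robot_capabilities: set[str]) -> bool:
        return {"two-arms", "vision"} <= robot_capabilities

    def run(self, domain_map: dict) -> None:
        print("Folding laundry at", domain_map.get("venue", "unknown venue"))


# A robot installs only the skills its owner bought, instead of shipping
# with one omnipotent brain:
for skill in [FoldLaundry()]:
    if skill.can_run({"two-arms", "vision"}):
        skill.run({"venue": "home-laundry-room"})
```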
To support that, general-purpose robots need six layers: hardware and low-level motor control at the bottom (layers 1 and 2), perception, mapping, and positioning in the middle (3–5), and applications on top (6).
Most of the world is grinding on 1 and 2. We chose to start with 3–5 plus applications.
That’s why we can already ship AI copilots on phones and glasses: perception, mapping and positioning running on top of the real world web.
Our first big proof: Cactus, our spatial AI for retail, is already generating millions in ARR even in handheld form. And with our next multi-million-dollar deal, a customer will run two apps (Cactus and Gotu) in the same domain on the Auki network – an early taste of the app store for physical spaces.
We don’t want to wait until humanoids are perfect to create value. Glasses and phones are like robots that aren’t bothered with having arms and legs yet.
So the sequence is: ship copilots on phones and glasses now, then bring the same stack to robots as they mature.
Just like LLMs became copilots for engineers, doctors, and lawyers, we expect physical copilots to be the dominant form factor for a long time. There are trillions to be earned in that era alone.
Once you have the maps, the positioning, and paying venues on the network, you’re in a strong position to become the app store and operating layer for the robots that arrive next.
We already partner with major Chinese robotics companies; starting next week, we’re kicking off partnership talks with leading US robotics companies too.
As Nils puts it, “If you just build the App Store for the real world, you’re huge. If you just build the OS for embodied AI, you’re huge. Our bet is that by doing things in the right order, we get a stab at all three.”
We’ve now open-sourced the reconstruction node on our GitHub: it takes phone captures of a space and turns them into a 3D reconstruction that lives in the venue’s domain, ready for downstream tools like navigation-mesh generation.
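As a sketch of the flow, here’s what driving the node could look like if it exposed a simple HTTP job interface. The endpoints and payloads below are assumptions for illustration; the repo on our GitHub is the source of truth:

```python
import json
import time
from urllib.request import Request, urlopen

# Hypothetical sketch: this HTTP interface is assumed for illustration
# and is not necessarily how the open-sourced node actually works.

NODE = "http://localhost:8080"  # assumed address of a local node


def submit_scan(capture_path: str) -> str:
    """Upload a phone capture and get back a reconstruction job id."""
    with open(capture_path, "rb") as f:
        req = Request(f"{NODE}/reconstruct", data=f.read(), method="POST")
    with urlopen(req) as resp:
        return json.load(resp)["job_id"]


def wait_for_mesh(job_id: str, poll_seconds: float = 5.0) -> str:
    """Poll the node until reconstruction finishes, return the mesh URL."""
    while True:
        with urlopen(f"{NODE}/jobs/{job_id}") as resp:
            status = json.load(resp)
        if status["state"] == "done":
            return status["mesh_url"]
        time.sleep(poll_seconds)


# job_id = submit_scan("walkthrough.mp4")
# mesh_url = wait_for_mesh(job_id)  # the mesh then lives in the venue's domain
```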
We also showed a nice integration example: using NVIDIA’s Swagger tooling to generate a navigation mesh from Auki domain data.
We’re now very close to an end-to-end pipeline: “Film a space with your phone, and any NVIDIA-enabled robot knows how to navigate it.”
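To show the shape of that pipeline, here’s a toy stand-in: in production the geometry would come from the reconstruction node and the navigation mesh from NVIDIA tooling, but a hand-written occupancy grid and breadth-first search capture the idea:

```python
from collections import deque

# Toy stand-in for the real pipeline: a scanned space reduced to an
# occupancy grid (0 = free, 1 = obstacle) and a BFS path planner in
# place of navmesh generation and planning.

def plan_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return None  # no route through the space


# "Film a space with your phone" would produce something like this grid;
# any navigation-capable robot can then plan through it:
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(plan_path(grid, (0, 0), (2, 3)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```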
The next big demo we want to ship this year is one we’ve dreamed about for years: making a physical space queryable.
The idea: you should be able to ask, "Where did I leave my yellow rubber duck?" and get guided to it in AR.
Once that works, you can imagine the next step: asking Terri to go fetch it.
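Here’s a toy sketch of how the query could work under the hood: objects the mapping layer has already detected are indexed with embeddings and matched against the question. A real system would embed with a vision-language model; the bag-of-words vectors and example positions below are stand-ins:

```python
import math
import re

# Toy sketch: a real system would use vision-language embeddings; a
# bag-of-words vector stands in so the example is self-contained.

def embed(text: str) -> dict[str, float]:
    """Unit-normalized word counts as a stand-in embedding."""
    vec: dict[str, float] = {}
    for word in re.findall(r"[a-z]+", text.lower()):
        vec[word] = vec.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in vec.values()))
    return {w: v / norm for w, v in vec.items()}


def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())


# Hypothetical index of detections: (label, position in venue meters).
index = [
    ("yellow rubber duck", (3.2, 0.8, 1.1)),
    ("red coffee mug", (0.4, 1.5, 0.9)),
    ("laundry basket", (5.0, 0.0, 2.3)),
]


def locate(query: str):
    """Return the best-matching object and its position for AR guidance."""
    q = embed(query)
    return max(index, key=lambda item: similarity(q, embed(item[0])))


print(locate("Where did I leave my yellow rubber duck?"))
# -> ('yellow rubber duck', (3.2, 0.8, 1.1))
```

The returned position is in the venue’s own coordinate frame, so the same answer can drive an AR arrow on your glasses today and a fetch target for a robot later.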
We think we’re only days of focused engineering away from a first internal version; the main constraint is time, not tech.
If you want the unfiltered version of these updates (and the bits we don’t put on X), join us in Discord.
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a 3X increase in the TAM of AI in general. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com