
We originally formed in 2019 to do shared AR for Warhammer. The first prototype came from community member phlogios, and the big technical headache was simple to say but hard to solve: “How can we get two devices to have a shared understanding of where they are in space down to the centimeter?”
GPS doesn’t work indoors, and it’s nowhere near precise enough. So we invented instant calibration: one phone shows the AR scene, the other just scans the screen and snaps into the same coordinate system in a second. That result helped us close about $20M in seed funding.
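The actual calibration pipeline isn't described here, but the underlying math can be sketched: phone A knows the on-screen marker's pose in its own world frame (it renders it), phone B estimates the same marker's pose from its camera, and chaining the two gives the transform between the devices' coordinate systems. A minimal sketch, assuming both poses are expressed as 4×4 homogeneous transforms (the helper names are illustrative, not Auki's API):

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def shared_frame_transform(marker_in_A, marker_in_B):
    """
    marker_in_A: marker pose in phone A's world frame (known, since A renders it).
    marker_in_B: the same marker's pose as estimated by phone B's camera.
    Returns the transform mapping points from B's frame into A's frame:
    T_A_from_B = T_A_marker @ inv(T_B_marker).
    """
    return marker_in_A @ np.linalg.inv(marker_in_B)
```

Once phone B applies this transform, both devices place content in the same coordinate system, which is what lets the second device "snap in" after a single scan.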
Then we asked: if AR glasses and robots are coming, what does the internet need?
We describe what we’re building like this: “The real world web allows digital devices to browse physical locations, making them navigable, searchable and accessible to AI.”
Instead of every robot or app maintaining its own private map, venues host their own “domains”: persistent maps of their space that any connected device can fetch and use.
This is what you saw at WOW Summit: a humanoid arriving at a venue it had never seen, spotting a marker, downloading the domain from the network and navigating on its own. The map wasn’t baked into the robot – it was externalized.
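The flow above can be sketched in a few lines. Everything here is a hypothetical stand-in for the real network (the `Domain` schema, registry, and function names are illustrative assumptions, not Auki's actual API), but it shows the key inversion: the robot resolves a marker it observes to a venue-hosted domain and downloads the map, rather than shipping with one:

```python
from dataclasses import dataclass

@dataclass
class Domain:
    """A venue-hosted spatial map (hypothetical minimal schema)."""
    domain_id: str
    anchors: dict  # marker_id -> (x, y, z) position in the domain's frame

# Hypothetical in-memory registry standing in for the network lookup.
DOMAIN_REGISTRY = {
    "wow-summit-hall": Domain("wow-summit-hall", {"entrance-marker": (0.0, 0.0, 0.0)}),
}

def plug_in(observed_marker_id):
    """
    A robot that has never seen the venue spots a marker, resolves it to a
    domain, and downloads that domain's map. The map is externalized:
    fetched from the network, not baked into the robot.
    """
    for domain in DOMAIN_REGISTRY.values():
        if observed_marker_id in domain.anchors:
            return domain
    return None
```

Because the marker also has a known position inside the domain, the same lookup localizes the robot in the venue's coordinate frame.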
We think general-purpose embodied AI needs six layers:
Most humanoid companies are stuck on layers 1–2. We started at layers 3–5, because a pair of glasses is really just a robot with no arms or legs, and so is a phone.
Phones and glasses already have the sensors; that lets us deploy physical AI copilots now, and bring robots in later using the same domains and apps.
Phones – Cactus
Cactus lets retailers:
Early results: up to 40% less walking for click-and-collect staff. We’re already going live in 1,000+ locations, with millions in pilot revenue and an open pipeline of over $150M.
Glasses – Mentra Live
We’re working with Mentra to give their open, programmable smart glasses spatial awareness via domains. Think: “Every white-collar job got an AI copilot. We believe the same will happen for physical labor.”
Front-of-house reports an issue; warehouse staff get AR directions straight to the right shelf, even on day one.
Robots – Plugging In
Special-purpose robots and humanoids connect to the same domains and apps that phones and glasses use. We’re already integrated or integrating with Unitree, EngineAI, PadBot, Slamtec, and others.
Medium term, our six-step plan is:
Most mapping companies collect your data and build a central map. We don’t. You don’t sell your data to us. We don’t get your data at all.
Venues and community nodes run the hardware and keep the maps. Our $AUKI token wires the economics together with a burn–credit–mint loop.
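The exact $AUKI mechanics aren't spelled out here, so the following is only a generic sketch of what a burn–credit–mint loop looks like: users burn tokens for service credits, credits pay for network services, and operators who do the work redeem earned credits by minting new tokens. All names, ratios, and numbers are illustrative assumptions:

```python
class BurnCreditMintLoop:
    """
    Hypothetical sketch of a burn-credit-mint loop. This is NOT the actual
    $AUKI mechanism; it only illustrates the general pattern.
    """
    def __init__(self, initial_supply=1_000_000.0):
        self.supply = initial_supply  # circulating tokens (illustrative)
        self.credits = {}             # account -> service credits

    def burn_for_credits(self, account, tokens, rate=1.0):
        """A user burns tokens and receives credits at `rate` credits per token."""
        self.supply -= tokens
        self.credits[account] = self.credits.get(account, 0.0) + tokens * rate

    def spend_credits(self, account, amount):
        """Credits pay for network services (e.g. hosting or querying a domain)."""
        self.credits[account] -= amount

    def mint_for_work(self, operator, credits_earned, rate=1.0):
        """An operator redeems credits earned by serving the network; new tokens are minted."""
        self.supply += credits_earned * rate
```

The point of the pattern is that token supply contracts when demand for the network rises (burns) and expands only as real work is performed (mints), tying issuance to usage rather than speculation.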
We see this as robot money for a shared spatial network, not a bolt-on token.
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a 3X increase in the TAM of AI in general. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com