
Nils called in from Hong Kong — back in the lab after two months on the road — with a packed update on Cactus product velocity, new robot partnerships, and a research direction that could change how stores get scanned.
The Cactus team shipped two major features showcased at internal Demo Day:
That second one is significant. Most planogram software works in flat 2D. Building it in 3D, with heat map overlays coming soon, means retailers can see how valuable the shelf real estate is while they're planning it.
"We've realized that we can very favorably compete with a couple of unicorns in the retail tech sector that are having a difficult time replicating what we're doing. So we're just steaming ahead."
The robot pipeline is accelerating:
"We have finally hit the inflection point where people are starting to throw robots at us."
The biggest potential partnership came from an unscheduled meeting at the Düsseldorf retail expo with one of the world's largest cleaning robot companies — already the #1 provider in Germany with tens of thousands of robots deployed globally. After a demo, the headquarters representative's response was simple: "I'm in. Let me connect you to the CTO right now."
Robin shared early research into removing the dependency on ARKit for store scanning. The problem: running the camera in AR mode on iPhones caps the capture resolution. Without it, you can capture full 48-megapixel stills (iPhone 14 Pro and later), making it practical to detect small details like barcodes from much farther away.
The trade-off is losing on-device AR tracking. The solution: offload reconstruction to a nearby compute node on the same Wi-Fi, using the existing domain reconstruction to anchor everything into the same coordinate system.
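The anchoring step can be illustrated with a small sketch. Assuming the compute node estimates a rigid pose of the phone relative to the domain reconstruction (the transform and function names here are hypothetical, not Auki's actual implementation), locally reconstructed points get mapped into the shared domain coordinate system with a standard 4x4 homogeneous transform:

```python
import math

# Hedged sketch: anchoring locally reconstructed points into a shared
# domain coordinate system. The "domain-from-local" transform is an
# assumed rigid pose (rotation + translation) that a compute node would
# estimate by registering incoming frames against the existing
# domain reconstruction.

def make_transform(yaw_radians: float, tx: float, ty: float, tz: float):
    """Rigid transform: rotation about the vertical (y) axis plus translation,
    as a 4x4 homogeneous matrix."""
    c, s = math.cos(yaw_radians), math.sin(yaw_radians)
    return [
        [  c, 0.0,   s,  tx],
        [0.0, 1.0, 0.0,  ty],
        [ -s, 0.0,   c,  tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply_transform(T, point):
    """Map a local (x, y, z) point into domain coordinates."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))
```

With every device's points expressed in the same domain frame, scans from different phones (or robots) land in one consistent map.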
Robin showed the first-ever demo of this approach — a custom iOS app feeding individual frames to a server building a 3D point cloud in real time. Early stage, but the building blocks are in place.
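The frame-streaming side can be sketched as a minimal wire format. This is an illustration of the general technique (length-prefixed frames over a local network socket), not Auki's actual protocol; the header layout and field names are assumptions:

```python
import struct

# Hedged sketch of streaming camera frames from a phone to a nearby
# compute node over TCP on the same Wi-Fi network. Each frame is a
# fixed-size header followed by the JPEG payload, so the receiver
# always knows how many bytes belong to the current frame.
# Header fields (assumed): frame id, capture timestamp, payload size.
HEADER = struct.Struct(">IdI")

def encode_frame(frame_id: int, timestamp: float, jpeg_bytes: bytes) -> bytes:
    """Prefix a JPEG payload with the fixed-size header."""
    return HEADER.pack(frame_id, timestamp, len(jpeg_bytes)) + jpeg_bytes

def decode_frame(buffer: bytes):
    """Split one complete frame off the front of the stream buffer.
    Returns (frame_id, timestamp, payload, remaining_buffer)."""
    frame_id, timestamp, size = HEADER.unpack_from(buffer)
    start = HEADER.size
    payload = buffer[start:start + size]
    return frame_id, timestamp, payload, buffer[start + size:]
```

Because frames carry their own timestamps and ids, the server can drop or reorder late frames while still feeding the reconstruction incrementally.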
This lines up well with a company from the Düsseldorf expo that builds tower-style shelf-scanning robots and is visiting Hong Kong in March to explore a partnership.
Auki is launching Papers and Beers, an event series inviting local researchers to present their work informally over pizza and beer. The first event is this Friday and has already drawn over 50 signups, exceeding expectations.
The timing is deliberate: with six robots in the lab and growing community infrastructure, Auki is creating opportunities for researchers to earn rewards building open-source tools on the posemesh.
Also worth noting: at Düsseldorf, the domain reconstruction was processed entirely by community nodes on the network. Conference footage was uploaded and automatically distributed to posemesh participants — a live proof-of-concept that made at least one robotics company say "wow."
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a roughly 3X increase in the TAM of AI in general: if today's AI addresses the ~30% of the economy that is digital, opening up the physical 70% triples the total it can reach. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com