
Greetings from Hong Kong! We are back in the lab firing on all cylinders after a busy week of travel. This week brings a surge of public speaking engagements as we continue to push the narrative on the future of the machine economy. In addition to a series of podcasts in Shenzhen and a Twitter Space with the Bit Angels network, our CEO is delivering a lecture to 18 Swedish CEOs on the strategic differences between the US and Chinese AI landscapes.
But the real excitement is what’s happening on the ground—and in the air—with our spatial computing infrastructure.
We are seeing tremendous momentum in agriculture with Budbreak's Mars Rover-style vineyard robots. Word on the street is that they have received a significant offer for a fundraise. This is a massive narrative victory for us, proving that the real world web extends far beyond retail into securing the global food supply.
To accelerate these agricultural deployments, we are exploring drone integration. By using a drone to do a first-pass scan of a vineyard, we can help the rover understand where it should and shouldn't navigate. This reduces deployment time and allows the rover to navigate more intelligently around obstacles. We are incredibly excited about the potential of combining drones, the Budbreak rover, Auki, and GEODNET all in one implementation.
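One way an aerial first pass can feed rover navigation is to rasterize obstacles spotted by the drone into a 2D occupancy grid that the rover consults before planning a path. This is a simplified sketch of that idea, not Budbreak's or Auki's actual implementation; the function names and grid parameters are illustrative assumptions:

```python
import numpy as np

def build_occupancy_grid(obstacles: list[tuple[float, float]],
                         size_m: float, cell_m: float) -> np.ndarray:
    """Mark grid cells containing obstacles detected in the drone's scan."""
    n = int(size_m / cell_m)
    grid = np.zeros((n, n), dtype=bool)
    for x, y in obstacles:
        i, j = int(y / cell_m), int(x / cell_m)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] = True  # cell is blocked
    return grid

def is_traversable(grid: np.ndarray, x: float, y: float, cell_m: float) -> bool:
    """The rover checks a cell before routing through it."""
    i, j = int(y / cell_m), int(x / cell_m)
    return not grid[i, j]

# Drone flags a trellis post at (3.2 m, 5.7 m) in a 10 m x 10 m vineyard block.
grid = build_occupancy_grid([(3.2, 5.7)], size_m=10.0, cell_m=0.5)
# The rover avoids that cell and treats open ground as traversable.
```

In practice the drone scan would also contribute row geometry and elevation, but even this minimal grid shows how a first pass shrinks what the rover has to discover on its own.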
As we take on more use cases, we are fielding calls from various robotics teams—from autonomous forklift companies to major OEMs. In fact, "one of the world's largest OEMs is offering us some hardware in exchange for equity in our company." If you are an OEM making hardware that fits our go-to-market strategy, we want to talk.
Our infrastructure continues to mature. This week, we officially open-sourced the splatter node. This node takes images from phones or drones and turns them into 3D representations, stitching together multiple recordings when there are overlapping markers.
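The stitching step can be illustrated with a small sketch (not the splatter node's actual code): when the same marker appears in two recordings, its pose in each recording's coordinate frame determines the rigid transform that maps one recording into the other. The helper names below are assumptions for illustration:

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def alignment_transform(marker_in_a: np.ndarray, marker_in_b: np.ndarray) -> np.ndarray:
    """Transform mapping points from recording B's frame into recording A's frame.

    Both arguments are 4x4 poses of the SAME marker, one per recording:
    A_T_B = A_T_marker @ inv(B_T_marker)
    """
    return marker_in_a @ np.linalg.inv(marker_in_b)

# Example: the shared marker sits 1 m ahead in recording A, 2 m ahead in B.
marker_a = pose_matrix(np.eye(3), np.array([0.0, 0.0, 1.0]))
marker_b = pose_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))
A_T_B = alignment_transform(marker_a, marker_b)

# The marker's position in B maps onto its position in A, so the two
# reconstructions land in one shared frame.
p_a = A_T_B @ np.array([0.0, 0.0, 2.0, 1.0])
```

With every overlapping recording re-expressed in one frame this way, the node can merge them into a single 3D representation.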
By open-sourcing this technology, anyone can fork it, improve it, and get involved. Soon, our Domain Management Tool (DMT) will send workloads directly to these nodes, allowing the community to farm rewards by setting up domains and sending them for reconstruction.
We are constantly improving how we turn 2D data into 3D environments. A long-standing challenge in scene reconstruction is featureless surfaces—like a plain white door—which often yield sparse or missing point clouds.
To solve this, our team has integrated monocular depth estimation (using MOGA) directly into our reconstruction pipeline. By feeding monocular depth as a prior, we generate significantly denser and more accurate point clouds. While this new method can sometimes result in slightly blurrier renders in data-rich areas, the overall structural accuracy of the reconstruction—especially in featureless zones—is vastly improved.
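The core idea can be sketched in a few lines (illustrative only; MOGA's internals and our actual pipeline are not shown). Every pixel of a predicted depth map is unprojected through the camera intrinsics into a 3D point, so even a texture-free surface contributes dense geometry:

```python
import numpy as np

def unproject_depth(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Lift an HxW depth map to an (H*W, 3) point cloud via the pinhole model.

    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth[v, u]
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat, featureless wall 2 m away still yields one point per pixel,
# where classical feature matching would find almost nothing.
depth = np.full((4, 4), 2.0)
points = unproject_depth(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Feeding points like these as a prior is what fills in the plain white door; the trade-off described above is that depth predictions can smooth over fine detail in regions the images already cover well.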
We are committed to making sure we can "do better and better reconstruction and better and better rendering, better and better semantic understanding from the data that is gathered from the devices."
It’s going to be a very exciting year as we continue to onboard more devices to the real world web across a growing number of verticals.
Auki is making the physical world accessible to AI by building the real world web: a way for robots and digital devices like smart glasses and phones to browse, navigate, and search physical locations.
70% of the world economy is still tied to physical locations and labor, so making the physical world accessible to AI represents a roughly 3X increase in the TAM of AI in general: if AI today addresses the ~30% of the economy that is already digital, unlocking the remaining 70% roughly triples the addressable market. Auki's goal is to become the decentralized nervous system of AI in the physical world, providing collaborative spatial reasoning for the next 100bn devices on Earth and beyond.
X | Discord | LinkedIn | YouTube | Whitepaper | auki.com