UI/UX for Spatial Computing on Smart Glasses

05/11/2024
We’ve been hands-on with Apple Vision Pro since day one, and we’re now fully convinced: this platform is a shift, not a gimmick.
Our team at ANML had the opportunity to design one of the first apps available on Vision Pro. Some of us were skeptical at first. That changed fast once we started building. When you get the experience right, Vision Pro feels like the future.
Even at its steep launch price, Apple moved over 200,000 units in the first ten days of preorders. The momentum is there. This is just the beginning.
Here’s what we’ve learned so far.
It’s tempting to go big. More windows, more content, more to look at. But with Vision Pro, more isn’t better. It’s just overwhelming.
We explored layouts that spread multiple windows across the user's field of view. They got cluttered fast, and every time you moved, you had to reposition everything manually. That friction adds up.
The best experiences we built were clean, clear, and purposeful. Simple always wins in spatial.
The Vision Pro simulator is useful, but it won't give you the full picture. You can't feel the fluidity of eye and hand input, and you can't judge real responsiveness. You're designing blind.
We found that many of the issues we thought we solved in the simulator still showed up on the actual device. If you're serious about building for spatial, you need access to the hardware. It's the only way to know what you’re really creating.
This was a big one. Vision Pro's eye tracking is impressive, and it's also sensitive. Pack too many interaction points into a 3D volume and you'll run into problems: unintended actions, visual noise, and accidental focus states show up fast.
Pair that with gesture control and the experience can go sideways quickly. We learned to reduce inputs, simplify interactions, and prioritize comfort.
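As a rough illustration of that principle, here's a minimal SwiftUI sketch of the pattern we kept coming back to: a handful of large, well-spaced targets instead of many small ones. The view and symbol names are hypothetical, not from our actual app.

```swift
import SwiftUI

// Hypothetical control bar: a few large, well-spaced targets instead of
// many small ones, so gaze focus lands where the user intends.
struct PlaybackControls: View {
    var body: some View {
        HStack(spacing: 24) {                        // generous spacing between targets
            controlButton("gobackward.15") { /* seek back */ }
            controlButton("play.fill") { /* toggle playback */ }
            controlButton("goforward.15") { /* seek ahead */ }
        }
        .padding()
    }

    private func controlButton(_ symbol: String,
                               action: @escaping () -> Void) -> some View {
        Button(action: action) {
            Image(systemName: symbol)
                .frame(width: 60, height: 60)        // a comfortably large gaze target
        }
        // Standard buttons get the system hover effect automatically on visionOS,
        // so focus feedback stays consistent with the rest of the OS.
    }
}
```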
Apple introduced three main ways to show content: windows, volumes, and spaces. Each one works differently. Windows suit flat, familiar content like reading, volumes hold bounded 3D objects you can view from different angles, and spaces take over the environment for full immersion.
Knowing when to use which one makes all the difference. It’s not just about visual design. It’s about choosing the right spatial container to match the user’s context.
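To make the distinction concrete, here's a hypothetical visionOS scene definition showing all three containers side by side. The identifiers, sizes, and placeholder views are illustrative only.

```swift
import SwiftUI

@main
struct SpatialDemoApp: App {
    // Drives the level of immersion for the space below.
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // Window: flat, familiar content such as reading or browsing.
        WindowGroup(id: "library") {
            LibraryView()
        }

        // Volume: a bounded 3D object users can view from different angles.
        WindowGroup(id: "model") {
            ModelView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // Space: an immersive scene for focused, environment-scale experiences.
        ImmersiveSpace(id: "training") {
            TrainingView()
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}

// Placeholder content views, just to keep the sketch self-contained.
struct LibraryView: View { var body: some View { Text("Library") } }
struct ModelView: View { var body: some View { Text("3D model") } }
struct TrainingView: View { var body: some View { Text("Immersive scene") } }
```

The point isn't the code itself. It's that the container is a deliberate choice you make per feature, not a default you inherit.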
Placing an interface into physical space changes everything. Depth, lighting, safety, accessibility: it all matters more now.
We tested custom glass treatments and opaque designs, trying to push the platform a bit. But Apple’s default glass is smart for a reason. It adjusts to light, preserves depth, and stays readable in different environments.
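In practice, staying on the default material is a one-line decision. A minimal sketch (the panel content is hypothetical):

```swift
import SwiftUI

// Hypothetical instruction panel that keeps Apple's default glass material,
// so the text stays readable as lighting and backgrounds change.
struct StepPanel: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Text("Step 3 of 7")
                .font(.headline)
            Text("Align the part with the guide before fastening.")
        }
        .padding(24)
        // The system glass adapts to the surrounding environment and
        // preserves the sense of depth behind the panel.
        .glassBackgroundEffect(in: .rect(cornerRadius: 24))
    }
}
```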
Even small details, like where users are looking or how they turn their heads, have big design implications. It's not just about the screen anymore. It's about how it lives around you.
Entertainment will get the headlines. But the biggest value may come from helping people work, learn, and move through complex tasks in new ways.
We see potential in training, real-time guidance, hands-free procedures, and spatial visualization. Not as gimmicks, but as tools that make hard tasks easier and more intuitive.
This is only the start. We’ll be sharing more about the app we’ve been building and how we approached it in the coming months.
If you’re thinking about how to bring your product into spatial computing, we’d love to help. It's a new frontier—and we’re ready to build what’s next.