The Apple Vision Pro launched on February 2nd, 2024, and I feel like it represents an inflection point in the XR industry. The screen resolution is high enough for it to serve as a viable screen replacement; eye tracking combined with pinch gestures is a paradigm shift in human-computer interaction, reaching new levels of speed and intuitiveness for getting into productivity flow states; and the software ecosystem of 600 visionOS apps plus over one million 2D iOS and iPadOS apps makes a really compelling argument that this is a proper spatial computing device, and not just the most advanced virtual reality / mixed reality headset launched so far. The ergonomics and weight distribution of both default straps are inexcusably terrible in my own experience, and there's a growing consensus that they're suboptimal for most people, with only a minority finding one of the straps completely adequate. I expect DIY fixes and third-party strap solutions to start to close this comfort gap. At $3,500, you'd expect something that feels better than most VR headsets on the market, but this is not the case.

I had a chance to catch up with Road to VR co-founder and executive editor Ben Lang, who had a number of demos in 2023 and has had early review access to the Apple Vision Pro for more than a week now (his full review should be landing either this week or next). We actually talked to each other using our Personas in Apple Vision Pro while recording external audio, since I still need to figure out how to record from within the headset. We share a lot of our first impressions of the Apple Vision Pro, offer some early technical feedback, and elaborate on many of the comparisons to Meta's Quest 3.

Lang elaborates on how usability and ecosystem integration are qualitative differences that go beyond any of the quantitative specifications, ultimately creating a holistic experience that transcends anything the Quest ecosystem has been able to achieve. Lang offers the pithy insight that ergonomics is the biggest bottleneck to wider use and adoption of this device: at launch there are already enough things to see and do from a productivity perspective that the bottleneck for XR is no longer a matter of software or retention, but rather that the weight distribution is simply uncomfortable for extended use. The $3,500 baseline cost is of course a barrier for the general public, but as with many other Pro devices, most people will be writing off the Apple Vision Pro as a business expense if they can functionally use it within the context of their business (many XR developers are treating it as a developer kit).

But what has been the most surprising to me is how much utility this device can have out of the box -- the caveat being that you really do need a Bluetooth keyboard, and potentially a Magic Trackpad or Mac laptop, to truly use it to its full capacity as a spatial computing device that replaces or augments your existing devices. The holistic integration of the Apple ecosystem is one of the biggest value propositions for people who are already fully committed or even partially invested in the Apple ecosystem. I'm only halfway committed: I have an iPhone and iPad, but no Magic Trackpad, MacBook Pro, or AirPods (needed for headphones, since the Apple Vision Pro has no headphone jack), and I use Google Drive instead of iCloud. Even my Logitech K830 Bluetooth keyboard had some Bluetooth connectivity glitches that made me wonder if I should just get Apple's keyboard. But I won't be able to reach the full potential of the Apple Vision Pro until I more fully commit to all of Apple's ecosystem. I made a couple of LinkedIn posts with more first impressions here and here.

There are a lot of annoying operating system and software bugs in this first-gen launch, but I expect them to get smoothed out over time.