The Apple Vision Pro has had a very long gestation period. The first evidence for it emerged in 2015, when Apple acquired Metaio and hired Mike Rockwell. That’s almost nine years ago. Even now, the product is in many ways incomplete and will take some years to develop into its potential.
So why is this taking so long? How does it differ, if at all, from other product developments at Apple? And what was the decision process for the product? Was it different from that of other Apple products? Now that we have the product to use, it’s possible to hypothesize answers to these questions and examine how Apple is evolving in its own long-term trajectory.
After using the product for a few weeks, my observation is that the development of the Vision Pro appears to have been an engineering-led heavy lift rather than design-led puzzle solving. To suggest such a distinction between engineering and design is perhaps obvious, but it’s not so obvious for Apple. Historically the two disciplines were blended imperceptibly or forged into a whole by management. Not without difficulty, but forged nonetheless. I therefore think that this effort is a departure.
For the Vision Pro there were significant measurable performance requirements such as resolution, frame rates, tracking (eye and hand) accuracy, response times, weight/size, and power consumption which all needed huge leaps forward. Orders of magnitude. None of these were good enough for those nine years and some are still not good enough, especially for mass adoption. This is before considering the software and ecosystem which need to be built for an entirely new experience. All of these breakthroughs are against hard physical (biological) constraints. Engineering is all about balancing physical constraints, making tradeoff decisions in pursuit of some optimum. The Vision Pro was hard to develop because it requires all these inter-related constraints to be balanced.
In contrast, design decision making has to consider purpose. It weighs the human and environmental (economic) factors: what is the user trying to do? What are the limitations of the person using it, and what are the circumstances of the usage? What should be the goal? It’s more a question of why than how. Design answers the job-to-be-done question, whereas engineering answers the how-to-get-it-done question. Apple’s leadership in product over the years was due to its insight into both of these questions. As Jony Ive would say: saying no to a thousand good ideas and focusing only on what mattered, based on keen insight into the human condition. Another way of saying that a product needs to justify its existence.
Design is not just how a product looks or feels. It’s how the user discovers it and thus why it exists.
The purpose of the Vision Pro can, however, be stated briefly: to enable a new human/computer interface. This is the very premise that created Apple: make computers easier to use and thus make them more useful and more used. The developments in silicon, optics, batteries, and communications of the last few decades suggested that a leap was possible beyond the prevailing touch interface. Multitouch itself was a leap from the trackpad/mouse, which was a leap from the keyboard input method. All these leaps caused Apple to surge forward, delivering a sequence of futures in a way that made them absorbable by many if not most.
In addition, the Vision Pro is not a product that the user needs to look at while using it. It’s completely invisible, having no physical presence to the user. You look through it rather than at it.
Therefore there is no question of “Why” or “How to use it”. These are self-evident. Consider the user’s input. The interface of looking at something and touching fingers together is so direct that there is no need to learn it. You rather need to unlearn the “computer” interface to use it.
Now consider the output: The canvas to paint on is not a rectangular screen with rounded corners and perhaps a pill-shaped cutout. It’s the entire world. It’s all that you can see. It’s not in front of you. It’s all around you. The place you use it is not in your hand, on a desk or a slice of time. It’s anywhere and anytime. In other words, we don’t need to have–as there was with the phone–“a conversation” with the user to discover what is missing and what can be fixed. If anything, the design surface is everything and everywhere.
The premise of spatial computing is that computing is consciousness itself. All that you see, everywhere you are, with no friction of “interaction” or “input/output”. There is no “user interface”. There is only space, natural and synthetic.
So it follows that this product is different. The Vision Pro is a project to develop not just a new computer but a new way that computers are used.
Before going into how exactly, let’s recall that when the phone was made mobile, it made calling a person possible. Before the mobile phone, calling meant calling a place. To call a person meant guessing where they would be. The shift to calling a person, and not just a place, meant all people were callable and all people could be callers. Once data was provisioned to the phone, all people could not only consume but also publish. Anything and always. Which they did, for good and bad.
The Apple Vision Pro is aiming to do even more. It’s saying that computing is not something you initiate and terminate. Or that you do in a place. With a phone you stop, look, act and then go back to what you were doing before. Spatial Computing is ambient. It’s wearable. The Apple Watch is also an ambient computer but it has limited output. It’s so limited that it’s essentially consumed with a single glance. The Vision Pro is the opposite. Every photon you see, it generates. To avoid it, you close your eyes.
For this reason, the classical questions of design are moot, or at least they are moot on the device. They become relevant at the app layer. The device is bionic. It’s defined by biology not consciousness. The questions of what to paint on the canvas the user sees, i.e. the world, is left as an exercise to the developer.
The product strategy, go-to-market, and all the details we are witnessing related to launch and packaging are a byproduct of this essential distinction. The development process, the ecosystem questions, the price points: they are all what they are because this is such a giant leap forward. It must be understood for its profundity, and that will not be quick or easy. It’s unintuitive, really. Rather like Special Relativity was when Einstein proposed it. It is still unintuitive today because it makes sense only on cosmic scales (and at the speed of light).
So I propose that those who might want to consider this new spatial computing era should call it something else: spacetime. The time spent in spatial computing but also this new era we are entering.
Welcome to spacetime.