Powering Sensory Design With AI

If you saw Tony Stark’s holographic interface in the Marvel movies, you’ve experienced spatial computing, arguably the next big thing in cutting-edge design. And if you aren’t a superhero fan? Chances are you’ve seen this technology in other areas of design. Whether you realize it, though, is another question entirely.

Spatial computing uses the space around us as a canvas for digital experiences. It enables people to use voice, sight, gestures, and other natural interactions to directly connect with information — eliminating the need to think and behave like a computer in order to be understood.

Taking design and creativity to the next level

Many view spatial computing not as just another fad but as a way to remove the input devices that currently intermediate our digital interactions (think keyboard, mouse, or touchscreen) and make these interactions more human.

“The human body’s capacity to move freely is empowering,” says Albert Hwang, an artist, designer, and developer who focuses on the physical 3D space. “But the human brain’s ability to model movement doesn’t stop at the ends of our fingertips. Our neuroplastic brains incorporate physical objects so that they become part of our bodies. So our capacity to communicate relies not only on our ability to control our bodies but our ability to control our environments.”

AI is the driving force of sensory design

Sensory design is a new design language that moves design beyond the confines of the screen and creates natural, emotionally engaging experiences that flow freely into the environment. It will be built largely on sensory data: computer vision, machine hearing, and machine touch. This requires AI-powered sensory machines that can output data such as your camera’s exact location and the dimensions of the space around you, and that can identify objects, people, speech, and biodata.

Then comes the task of turning this data into a more intuitive, human experience. AI plays a central role here, interpreting sensory data and combining it with surfaces and contextual needs in a way that mimics human perception.
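To make this concrete, here is a minimal sketch, in Python, of what a single frame of sensory data might look like and how a system could act on it. Every field name and threshold below is a hypothetical stand-in; real spatial platforms expose richer, device-specific APIs.

```python
from dataclasses import dataclass, field

# One "frame" of sensory data. All field names are hypothetical stand-ins.
@dataclass
class SensoryFrame:
    camera_position: tuple[float, float, float]  # where the camera sits in the room (meters)
    room_dimensions: tuple[float, float, float]  # estimated width, height, depth of the space
    detected_objects: list[str] = field(default_factory=list)  # e.g. ["chair", "person"]
    transcribed_speech: str = ""                 # output of machine hearing
    heart_rate_bpm: float | None = None          # optional biodata signal

def choose_placement(frame: SensoryFrame) -> str:
    """Toy interpretation step: decide where digital content should appear
    based on what the sensors report about the environment."""
    if "person" in frame.detected_objects:
        return "float content beside the person, not over their face"
    if frame.room_dimensions[2] < 2.0:  # a cramped space (depth under 2 m)
        return "pin content to the nearest wall"
    return "anchor content to the floor plane"

frame = SensoryFrame(
    camera_position=(0.0, 1.6, 0.0),
    room_dimensions=(4.0, 2.5, 3.0),
    detected_objects=["chair", "person"],
)
print(choose_placement(frame))  # -> float content beside the person, not over their face
```

The interesting design work lives in functions like choose_placement: the rules encode how a system should respect the people and spaces it perceives.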

Armed with this technology, the host of designers, engineers, architects, and others who use sensory design to create will be unlimited in their ability to design a world that is multisensory, engaging, and natural.

The following are four ways AI can play a role in helping sensory designers fulfill spatial computing’s promise of making digital interactions intuitively human.

1. Start by understanding AI

We’ve all heard a lot about the threat of AI, or how it is an incomprehensible black box. But we are still responsible for its effects on society. So let’s create black boxes that are designed ethically, held accountable to independent policies, and governed by transparent corporate controls.

I encourage every designer to face their fears and at least learn the terminology of AI. You don’t need to learn to code, but you should learn how to talk with engineers. Instead of worrying about becoming a full-fledged AI practitioner, simply be willing to ask what AI can do, what it can’t, and what should be done with it.

Designers play a critical role in this regard. With all the power that AI represents, the designer is the one to ensure that spatial computing is developed in such a way that it positively influences humanity. For this reason, I encourage designers to understand and openly discuss the opportunities and the problems that AI brings.

The truth is, engineers need designers in this process just as much as designers need them.

Designers are trained to understand human behavior and how to design for it. Engineers, on the other hand, can build the technology, but their training doesn’t center on how to apply it or what to use it for. Sitting side by side, engineers and designers can succeed in spatial computing, as long as they can speak the same language.

2. Design AI to make spatial computing more effective

Spatial computing will require a massive amount of personalization and designed content — much of it beyond what we can imagine right now — but, most importantly, it needs to be more productive and effective than the alternative.

For example, on many AR-enabled phones, AI can estimate basic emotional states from your facial expression: anger, contempt, disgust, fear, happiness, sadness, surprise, or a neutral expression.

But a better, more effective estimate of emotion should take into account other contextual cues, such as the task the person is doing, the environment they are in, what they are saying, and physiological signals like heart rate. People take all of these cues into account to understand how they are feeling or the feelings of others. Machines can do the same.
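Here is a minimal sketch of that kind of multimodal fusion, where each sensing channel votes with its own probability distribution over emotions and a weighted average combines the votes. The emotion labels, channel probabilities, and weights are illustrative assumptions, not the output of any real model.

```python
# Late fusion of emotion estimates from several sensing channels.
EMOTIONS = ["anger", "fear", "happiness", "sadness", "surprise", "neutral"]

def fuse(channels: dict[str, list[float]], weights: dict[str, float]) -> dict[str, float]:
    """Weighted average of per-channel emotion probability distributions."""
    total = sum(weights[name] for name in channels)
    fused = [0.0] * len(EMOTIONS)
    for name, probs in channels.items():
        w = weights[name] / total  # normalize over the channels present
        fused = [f + w * p for f, p in zip(fused, probs)]
    return dict(zip(EMOTIONS, fused))

# The face alone leans "neutral", but tense speech and a racing heart
# pull the combined estimate toward "fear": exactly the kind of
# contextual correction described above.
estimate = fuse(
    channels={
        "face":  [0.05, 0.15, 0.10, 0.10, 0.10, 0.50],
        "voice": [0.30, 0.30, 0.05, 0.10, 0.20, 0.05],
        "heart": [0.25, 0.40, 0.05, 0.05, 0.20, 0.05],
    },
    weights={"face": 0.4, "voice": 0.3, "heart": 0.3},
)
print(max(estimate, key=estimate.get))  # -> fear
```

In a real system, the fusion weights would themselves be learned, and each channel’s distribution would come from its own perception model rather than hand-written numbers.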

3. Use AI to democratize creative tools

Previously, my team built a tool that records the movement of an object, like a bird flying or a friend dancing, through a phone’s camera, then instantly transfers that movement to a 3D object or a 2D design. Seeing the wonder on people’s faces as they experienced the magic of spatial computing for the first time was incredible. The tool went on to influence features in other Adobe products.
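The tool’s internals aren’t public, but the general recipe (capture a motion path, then replay it on another object) can be sketched with off-the-shelf computer vision. In this rough sketch, OpenCV’s Lucas-Kanade point tracking stands in for the capture step; the video path, tracking parameters, and starting position are all placeholders.

```python
import cv2
import numpy as np

# Step 1: capture a motion path by tracking one feature point through a video.
cap = cv2.VideoCapture("clip.mp4")  # placeholder path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick a strong corner to follow (e.g., a point on the moving bird or dancer).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1, qualityLevel=0.3, minDistance=7)
trajectory = [tuple(pts[0, 0])]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow: where did the tracked point move to?
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    if status[0][0] == 0:
        break  # lost the point
    trajectory.append(tuple(pts[0, 0]))
    prev_gray = gray
cap.release()

# Step 2: "retarget" the motion by converting absolute positions into
# frame-by-frame offsets that can drive any 2D design element.
offsets = np.diff(np.array(trajectory), axis=0)
element_pos = np.array([100.0, 100.0])  # starting position of the design element
for dx, dy in offsets:
    element_pos += (dx, dy)  # animate the element along the recorded path
```

The same offsets could just as easily drive a 3D object’s transform; once recorded, the motion becomes reusable design material.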

We all need to make creative expression simpler, more natural, less tied to devices, and more tied to our human senses. In many ways, spatial tools will democratize creativity by opening up creative capabilities to anyone.

Today, animation is difficult enough that it is left to professionals with access to specialized tools, but I believe spatial computing will make that limitation a thing of the past. And designers will play a huge role in ensuring that AI-powered spatial computing tools ultimately speak to what is most human in all of us.