The History Of User Interfaces—And Where They Are Heading

“A funny thing happens when you design a computer everyone can use.”

That was the headline of a 1984 print campaign for Apple’s newest device, the Macintosh. At the time the company was kicking off a revolution in personal computing with its graphical user interface (GUI) and mouse—two innovations that helped democratize computing by making computers understandable and approachable for the average consumer.

During the ’90s and early 2000s, however, UI innovation largely stalled. But as the 2010s approached, a renaissance began that has led to many powerfully disruptive—and, frankly, more human—forms of interaction.

So where are we headed as the planet continues to digitize? To figure that out, we must start with a look at how we have interacted with computing to date and how those innovations have created the foundation for the newest forms of interaction.

1960s-1980s

An IBM PC print ad from the 1980s showcased a—wait for it!—keyboard.

For much of the late 20th century, the keyboard dominated human interaction with computing technology. Mice, touch interfaces, voice control, and the like didn’t yet exist. Still, the keyboard was a vast improvement over the punch card, which had been used to program early computers in the 1940s and 1950s. But something happened in the early 1980s that dramatically changed the environment …

1984

Steve Jobs famously visited Xerox PARC in 1979, and the inspiration he found there in the form of a GUI and mouse is now the stuff of legend. These two UI innovations created a seismic event in technology adoption: Apple sold 1 million Macintoshes by 1988, and IBM, Compaq, and others quickly followed with computer mice of their own.

Microsoft, of course, rode this wave with the introduction of Windows 1.0 in 1985, but it wasn’t until Windows 3.1, in 1992, that it began to feel the GUI tailwind.

Apple introduced the Macintosh with a simple GUI and mouse, fundamentally changing the computing landscape.

1994-1997

As the 1990s began, the laptop computer started to overtake the desktop, and with it came incremental changes to the mouse-and-keyboard interface. Apple started incorporating trackballs and trackpads into its PowerBook laptops, while IBM introduced pointing sticks (branded “TrackPoint”) into its laptops.

While computing continued to miniaturize from the desktop to the portable PC, a new device was also becoming popular: the PalmPilot. Along with the handheld form factor, Palm introduced a new user interface, the stylus, which worked with its touchscreen, and an alternative alphabet it called “Graffiti.”

The PalmPilot introduced users to handwriting recognition through a stylus.

Voice also started to become a form of interaction with the 1997 introduction of Dragon NaturallySpeaking. Dragon Systems (whose speech technology eventually passed to Nuance) sold several million copies of its voice-recognition software, though it wouldn’t be until the introduction of today’s generation of voice assistants that voice would truly begin to take off as a form of computing interaction.

Dragon Systems introduced voice recognition and dictation to millions of users.

Early 2000s

UI development was mostly incremental during the early 2000s, improving on already-established devices and tools.

The many iterations of the iPod music player demonstrate how Apple continued to push the boundaries of UI design during the early 2000s.

2007-2010

The latter part of the 2000s witnessed a big leap forward in UI design, again led by Apple. Touch interfaces took off dramatically with the introduction of the iPhone in 2007 and the iPad in 2010. On both devices, multipoint capacitive touch let users interact with digital content in new ways. While Apple didn’t invent this UI (the underlying multitouch technology came from FingerWorks, which Apple acquired in 2005), the company certainly democratized it, with other smartphone and tablet manufacturers adopting forms of touch over the following years.

Apple introduced the world to touch interfaces with the iPhone and iPad.

2011-Present

While touch has become a ubiquitous part of the way we interact with digital content today (not just on smartphones and tablets, but on kiosks, ATMs, appliances, and more), voice is experiencing a revival as a user interface.

Voice assistants started to gain steam in 2011 with Apple’s Siri. Since then we’ve also seen the introduction of Google Now (2012) and Amazon’s Alexa devices (2014). These services and devices depend on the data and content assets each platform has acquired to fulfill user requests.

Thus, when you ask Siri for directions, it can quickly leverage Apple Maps to provide a route. And when you ask an Amazon Echo (powered by Alexa) to play a song or read an Audible book, Alexa draws on those Amazon assets to play back your content.
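To make the pattern concrete, here is a minimal, purely illustrative sketch in Python. The names and logic are entirely hypothetical (this is not how Siri or Alexa are actually implemented); it simply shows the basic idea of a spoken request being parsed into an intent that is then routed to whichever platform asset can fulfill it.

```python
from dataclasses import dataclass

# Hypothetical structure for a parsed voice request: an intent name plus
# whatever parameters ("slots") the speech pipeline extracted.
@dataclass
class Intent:
    name: str    # e.g. "get_directions" or "play_song"
    slots: dict  # e.g. {"destination": "the airport"}

def fulfill(intent: Intent) -> str:
    # Route each intent to the platform asset that owns the relevant content,
    # much as Siri hands directions to Apple Maps or Alexa hands playback
    # to Amazon Music or Audible.
    if intent.name == "get_directions":
        return f"Routing via the maps service to {intent.slots['destination']}"
    if intent.name == "play_song":
        return f"Streaming '{intent.slots['title']}' from the music catalog"
    return "Sorry, I can't help with that yet."

print(fulfill(Intent("get_directions", {"destination": "the airport"})))
```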

Amazon’s Alexa-powered devices support the latest generation of voice interface.

Immersive experiences enabled by a new generation of virtual and augmented reality devices add another dimension to user interfaces. Consumers and enterprises are just beginning to dip their toes into new forms of interactivity in gaming, architecture, real estate, engineering, and social networking, to name a few examples.

Today this means wearing a rather large head-mounted device and using a mouse or gaming controller to interact with virtual objects. But as haptic devices—which allow for the perception and manipulation of objects using touch—reach new levels of performance and miniaturization, one can imagine being able to touch, walk, run, and interact with virtual worlds in the same natural way we interact with the real world.

The state of the art for immersive UI today: a large head-mounted device paired with a mouse or gaming controller.

Haptic suits, like this one from Hardlight VR, enable users to interact more naturally and receive feedback from a virtual environment directly on their bodies.

Bringing Humanity Back To Computing

If you step back from the history we’ve just walked through, you will see that UI design is heading in a clear direction: toward organic forms of interactivity that are native to our biology.

Looking back, we can see that keyboards, mice, and trackpads were really just hacks meant to bridge the gap between our natural modes of interaction (speech and touch) and computing surfaces. And they worked well for the better part of 30-plus years.

Now we are on the edge of a return to more natural interfaces that involve our fingers, voices, and bodies in space. This is all made possible by advances in network infrastructure, cloud-based storage and computational power, and the aggregation of the data needed to teach machines to understand and interpret our human interactions.

These advances open up all sorts of interesting possibilities.

We’ve touched on just a few of the exciting possibilities that might be unlocked by continuing advances in UI technology. As we bring more humanity back to computing, we will also be able to engage more of the population, who will no longer need to master typing or the complexities of a desktop interface to interact with digital content.