With smartphones clearly at the top of their product lifecycle S-curve (see image below), the marketplace is trying to figure out which dominant user interface (UI) will drive the next growth curve in technology. The combination of the smartphone supply chain and advancements in machine learning has opened up several new platforms that are gaining traction in human-computer interaction: wearables, hearables, AR, VR, voice UI, or some combination of these.

Mobile phone S-curve

Source: Mobile is eating the World, Benedict Evans

Removing Friction

The combination of these technologies is removing a great deal of the friction involved in interacting with machines to get things done. To quote Benedict Evans, a VC at Andreessen Horowitz: “No buttons, apps, or intermediate steps, no ‘computer stuff’ between you and the service – just use it (hopefully)”.

Frictionless computing

Source: Mobile is eating the World, Benedict Evans


The 1st Round of Smartwatches

Many people, particularly in the world of wearables, thought the next UI was going to be the smartwatch. Just look at the industry analyst estimates of wearables growth: the majority of that growth was expected to come from smartwatches. However, smartwatches have not reduced friction in the user experience, and in some cases have added friction, which is a big reason they have not grown at the levels many expected. In general, the first round of smartwatches replicated what was already done on the smartphone (down to the app paradigm) but didn’t do it as well: everything your phone does, just on a smaller screen that’s harder to navigate and interact with? Not hugely compelling.

The Voice

Contrast that with the huge growth in voice UI, mostly in the form of Amazon Alexa, Apple Siri, and Google Home devices. Amazon sold 8.2 million Alexa devices in a year and a half (the vast majority in the US and UK, with availability in more markets only recently). Apple’s Siri and Google’s voice assistant are on hundreds of millions of devices around the world. According to one report, Siri handles 2 billion voice commands per week.

People are already starting to use voice UI for many of the most common things they do on their phones or computers, as this research shows:

Most common tasks for voice UI

Source: Creative Strategies, The Voice UI has Gone Mainstream


What This Means for Wearables and Hearables

Obviously, a voice UI requires a device with connectivity, an AI/machine learning engine, processing power, sensors, and much more, depending on the use case. However, as more people become accustomed to speaking to their devices as the primary interface, they will expect that same capability when they are away from a stationary Amazon Echo, Google Home, or similar device. Almost all hearables not only have a means of communication (microphone, in-ear speakers, etc.) but increasingly carry a variety of sensors, including motion sensors, biometric sensors, and noise control (hearing augmentation) features, that can provide valuable context to a voice assistant. In particular, sensor fusion from hearables can instill more “humanity” in mobile voice assistants, because hearable sensor data can communicate: 1) that you are the one who is actually speaking (by sensing mouth motions) and 2) your “emotional tone” (allowing the AI to learn the best way to respond to you at any given instant).
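
To make that sensor-fusion idea concrete, here is a minimal, purely illustrative Python sketch. Every name, signal, and threshold in it is hypothetical (not from any real hearable SDK or voice-assistant API): it simply gates a voice command on whether the wearer’s mouth is actually moving, and derives a rough “tone” label as extra context for the assistant.

```python
# Illustrative sketch only: all sensor names, thresholds, and the tone
# heuristic below are hypothetical, not from any particular hearable platform.
from dataclasses import dataclass


@dataclass
class HearableFrame:
    """One time slice of (hypothetical) hearable sensor data."""
    mic_energy: float      # normalized microphone energy, 0.0-1.0
    jaw_motion: float      # in-ear motion signal correlated with mouth movement, 0.0-1.0
    heart_rate_bpm: float  # biometric sensor reading


def wearer_is_speaking(frame: HearableFrame,
                       mic_threshold: float = 0.3,
                       motion_threshold: float = 0.2) -> bool:
    """Fuse microphone and motion data: only attribute speech to the wearer
    when the mic hears a voice AND the jaw/mouth is moving at the same time."""
    return frame.mic_energy > mic_threshold and frame.jaw_motion > motion_threshold


def emotional_tone(frame: HearableFrame, resting_bpm: float = 65.0) -> str:
    """Very rough proxy for 'emotional tone' from heart rate alone;
    a real system would fuse many more signals."""
    return "stressed" if frame.heart_rate_bpm > resting_bpm * 1.3 else "calm"


# Example: decide whether a voice command should be attributed to the wearer,
# and attach tone as context for how the assistant responds.
frame = HearableFrame(mic_energy=0.7, jaw_motion=0.5, heart_rate_bpm=92.0)
if wearer_is_speaking(frame):
    print(f"Command accepted; respond in a {emotional_tone(frame)}-aware way.")
else:
    print("Voice detected, but not from the wearer; ignore.")
```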

In other words, hearables can do more than just replicate your smartphone’s capabilities. They can enhance and extend the experience, much as smartphones enhanced and extended the capabilities of mobile phones.

However, the advancement of voice UI is just one of the trends driving growth in hearables, a market now expected to reach $8B in 2020.

We’ll cover those additional trends driving the hearables market in our next post.