Focus on UX Mitigates Driver Distraction

By Jeff LeBlanc

Director of User Experience


February 26, 2019



Abstract

According to the CDC, distracted driving caused an average of 9 deaths and 1,000 injuries per day in 2018. Fortunately, user experience (UX) design can help mitigate some of the distraction factors that can lead to tragedy. This article will explore the pros and cons of several common input technologies (touch, gestures and voice interaction) and demonstrate how employing best practices of UX design can help impel drivers to keep their hands on the wheel and eyes on the road.

Let’s face it, we all do it.

You’re cruising down the highway and your phone buzzes with a text. You know you shouldn’t take your eyes off the road to see who it is, but you buckle under the weight of curiosity and steal a quick look.

Satisfied it’s not a message that needs an immediate reply, your focus turns back to the road. But then you hear the first notes of your favorite song -- that Sirius subscription is paying off! Again, you take your eyes off the road just for a sec because you have to turn it up. Singing along at full I’m-alone-in-the-car volume makes you thirsty, so once again, you take your eyes off the road as you reach for your iced coffee.

No harm, no foul, right? After all, you only gave in to distraction for a few seconds.

But, that’s the problem.

According to the National Highway Traffic Safety Administration (NHTSA), every year traffic accidents account for nearly 1.3 million fatalities worldwide, with 94 percent of crashes caused by human error. In 2015, around 391,000 people were injured in automobile accidents specifically attributed to distracted driving, and the following year, more than 3,000 people died in distracted-driver-related accidents. (1)

Automobile manufacturers, while certainly concerned with driver and occupant safety, are in business to make a profit. That gives them a strong incentive to incorporate more and more technology into vehicles, because that’s what consumers are asking for. Today’s drivers spend a lot of time in their vehicles. In Massachusetts, where Boston UX is based, AAA reports that the average daily round-trip commute is a little over an hour. That time quickly adds up, so it’s no wonder people want the comfort and convenience that in-cabin tech can provide.

But they want safety too. As designers, we have the expertise to make that (really cool) driver-cockpit tech less of a distraction, and in turn make our roads a little safer.

Roots of distracted driving

The term “distracted driving” is often used as a catch-all to describe instances of drivers failing to pay full attention to what should be their paramount task, guiding a moving vehicle safely down the roadway. In practice, being distracted from this task comes in several forms:

  • Visual distraction -- taking one’s eyes off the road
  • Manual distraction -- taking one’s hands off the steering wheel
  • Cognitive distraction -- taking one’s mind off the full task of driving

There is plenty of legislation already on the books that prohibits the use of cellphones while driving, both for texting and making calls. Some states go so far as to prohibit touching portable electronic devices at all while driving, in an effort to ensure hands and eyes remain engaged with the task at hand. The electronics built into the vehicle itself, however, such as screens in the center console, are ripe for well-designed solutions that reduce the distraction of secondary tasks.

Interface technology

When considering technology-based solutions in the car, there are four major avenues commonly explored, each providing a rich palette for designers to work with:

  • Remote control
  • Touch
  • Gestures
  • Voice interaction

Remote control functionality means being able to complete tasks without the driver having to move his or her hands to the central control area. In practice, this means automakers move some hardware controls onto the steering wheel -- typically for common tasks like adjusting radio volume, answering an incoming call or selecting a media-input source. For more complex tasks, like dialing a contact, additional solutions are needed.

In-vehicle touchscreens are a tremendous improvement in flexibility. The dynamic nature of software makes possible an almost infinite number of control configurations, and accommodates enhancements and branding changes as often as the manufacturer desires. Automakers love driver-cockpit touchscreens because consumers are familiar and comfortable with touch technology: roughly 9 in 10 adults in the United States own at least one touchscreen device.

But touchscreens are no usability panacea. In fact, they offer about the same level of usability as older console configurations, where each function had a dedicated hardware button or knob. Because the touchscreens used in today’s vehicles have no physical affordances or haptic feedback to confirm that a virtual button was pressed, drivers usually have to look away from the road to make sure they hit the right target.

Their usability is also limited by something once considered a benefit: flexibility. As touch technology has become mainstream, manufacturers have raced to pack more and more functionality into these screens, which necessitates a menu system of some kind. And menus require operator focus -- a visual distraction, plus some cognitive load to navigate them.

Fortunately, there are a few ways designers can improve the usability of these screens by applying user experience (UX) design best practices. Following the mobile-design notion of “glanceability” lets us craft displays so the eyes can capture key information at a glance and then return to the road.

Other measures include respecting Fitts’s Law, which tells us we can improve usability by making touch targets larger so they’re easier to hit, and hit more quickly. Bigger targets also let our sense of proprioception help us -- we know where parts of our body are in space without having to look, such as when our hands are in front of a screen. This can be further enhanced by new manufacturing techniques, such as curved screens or even screens with physical buttons or knobs added. There has even been research on adding simple haptic feedback, such as slight vibration, to screens (though it’s questionable how useful that will be in an in-cabin environment that already includes road vibration).
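
To make the math concrete, here is a minimal sketch of the Shannon formulation of Fitts’s Law. The timing constants are illustrative placeholders, not measured values.

```python
import math

def fitts_movement_time(distance_mm, width_mm, a=0.2, b=0.1):
    """Predicted time (seconds) to acquire a target, per the Shannon
    formulation of Fitts's Law: MT = a + b * log2(D/W + 1).
    The constants a and b are device- and user-specific; these
    defaults are placeholders for illustration only."""
    index_of_difficulty = math.log2(distance_mm / width_mm + 1)
    return a + b * index_of_difficulty

# Doubling a touch target's width lowers its index of difficulty,
# and with it the predicted time to hit it:
print(round(fitts_movement_time(300, 10), 2))  # 10 mm target: 0.7 s
print(round(fitts_movement_time(300, 20), 2))  # 20 mm target: 0.6 s
```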

That’s where gesture interaction comes in. Ever since the movie “Minority Report” was released in 2002, people have been fascinated with gesture-based user interfaces. Waving our hands in the air to make something happen seems almost magical. Thanks to advances in technology, what was once far-fetched Hollywood showmanship is now reality -- one made possible by computer vision techniques that interpret the movement of hands or fingers in the air and convert those gestures into commands a computer can recognize. Specialized cameras, such as time-of-flight cameras, can view the area in front of the vehicle’s center stack, above where the gear shift is typically located, and pass along any hand motions for interpretation.

In terms of usability, this method lies somewhere between touch and voice interaction. According to research conducted at Spain’s University of Vigo (2), touch was considered more reliable than gestures, but gesture technology was qualitatively considered less distracting. Still, gestures have drawbacks. They require at least one hand to come off the wheel. And there’s some cognitive load in deciding what gesture to make, since the driver has to recall a command rather than recognize a control. For simple interactions, like changing music or volume, a gesture-based UI seems like a win. But going deeper into a menu system could prove challenging for a driver without looking at the display.
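
For those simple cases, the software side can be as thin as a lookup from recognized gesture to command. Here’s a hypothetical sketch; the gesture labels and handler functions are invented for illustration, not a real automotive API.

```python
# Hypothetical mapping from gesture labels (as a computer-vision
# recognizer might emit them) to simple in-cabin commands.

def volume_up():
    print("volume +1")

def volume_down():
    print("volume -1")

def next_track():
    print("skipping to next track")

GESTURE_COMMANDS = {
    "swipe_right": next_track,
    "rotate_clockwise": volume_up,
    "rotate_counterclockwise": volume_down,
}

def on_gesture(label):
    """Dispatch a recognized gesture, ignoring anything unmapped so
    a stray hand motion never triggers an unintended action."""
    handler = GESTURE_COMMANDS.get(label)
    if handler:
        handler()

on_gesture("rotate_clockwise")  # prints: volume +1
```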

So what about voice interaction? This seems like the clear winner for reducing driver distraction. It’s 100 percent hands-free and, depending on the design, can allow the driver to bypass menus completely by using correct word choices to jump to any interaction in the system.

But, that actually presents a great challenge.

First, a bit of context. There are two main aspects to any system that uses voice interaction. The first is Automated Speech Recognition (ASR): capturing sounds detected by a microphone and having a computer recognize those sounds as human speech. Industry analysts indicate that a number of technology companies have ASR working at 95 percent accuracy or better, which is right about the level of human-to-human speech recognition.

The second aspect is going from recognized speech to an understood action or meaning, referred to as Natural Language Understanding (NLU). This is a notoriously hard problem, but advances in artificial intelligence and neural networks continue to chip away at it. Certainly, commercial voice-control systems like the Amazon Echo and Google Home do an impressive job of serving as our own computerized agents around the home.
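
To show how the two stages fit together, here’s a toy sketch of a voice pipeline. A production system would use a real ASR engine and a trained NLU model; here the ASR step is stubbed out and the NLU is simple keyword matching, both invented for illustration.

```python
# Toy voice pipeline: ASR turns audio into text, NLU turns text
# into an (intent, slots) pair the application can act on.

def recognize_speech(audio):
    """ASR stage: audio in, text out. Stubbed for illustration."""
    return "turn the volume up"

def understand(text):
    """NLU stage: map recognized text to an intent plus slots.
    Real NLU is far harder; keyword matching stands in here."""
    text = text.lower()
    if "volume" in text:
        direction = "up" if "up" in text else "down"
        return "set_volume", {"direction": direction}
    if "call" in text:
        return "place_call", {"contact": text.split("call", 1)[1].strip()}
    return "unknown", {}

intent, slots = understand(recognize_speech(b""))
print(intent, slots)  # set_volume {'direction': 'up'}
```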

The reason those devices work as well as they do is the availability of cloud computing, which in turn relies on a fast network connection. In-home devices work great as long as your internet is up -- but when it’s down, not so much. When my connection drops, my Alexa responds with “I’m having trouble understanding right now.”

That means, for voice to reach its potential in vehicles -- to get really good recognition -- the vehicle needs an excellent cellular connection, which can be costly. Otherwise, the speech experience will only be as good as the on-board vehicle computing resources can handle. (Developments like the Amazon Alexa Auto may quickly improve the voice experience.)
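
One way around the connectivity problem is graceful degradation: prefer the accurate cloud recognizer when the network is up, and fall back to a smaller on-board model when it isn’t. Here’s a hypothetical sketch; both recognizers are stand-ins, not real services.

```python
# Hypothetical fallback between cloud and on-board recognizers.

class CloudUnavailable(Exception):
    """Raised when the cellular link to the cloud ASR is down."""

def cloud_recognize(audio):
    """High-accuracy cloud ASR; needs a live cellular connection."""
    raise CloudUnavailable  # simulate a dropped connection

def onboard_recognize(audio):
    """Smaller local model: always available, lower accuracy."""
    return "volume up"

def recognize(audio):
    try:
        return cloud_recognize(audio)
    except CloudUnavailable:
        return onboard_recognize(audio)

print(recognize(b""))  # falls back to the local model: 'volume up'
```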

Less distraction is still distraction

With ASR and NLU issues in mind, we circle back to the question of driver distraction. According to a number of studies, including one by the Insurance Institute for Highway Safety (3), voice is currently the least-distracting modality for accessing in-vehicle technology. But the operative word here is “least,” as any distraction means a portion of the driver’s attention is not focused on the road. The cognitive distraction posed by voice in its present state is similar to that of gestures, since a driver has to remember what commands to use rather than recognize options on a visual screen. That cognitive distraction increases when the ASR or NLU does not work well, which leads to driver frustration.

A study conducted by researchers at the University of Utah (4) classified distractions on a five-point scale. Simple distractions, like listening to music or even a text read aloud by a text-to-speech system, rated a 2 on the scale. More complex, voice-based interactions, such as composing a message or dialing a contact in your address book, clocked in at a 3. Very complex interactions, such as composing a message that required additional recollection or math, rated a level 5. Even using Siri generated a surprising level 4 distraction.

So, while voice systems might keep your hands on the wheel and your eyes on the road, they still impose some level of cognitive distraction on the driver.

For the consumer, the solution here (at least with today’s tech) is to limit use of all the cool tech automakers are stuffing into today’s computers on wheels -- at least while moving -- to essential tasks. After all, do you really need to send that complicated text to your accountant and scan for your favorite podcast while cruising at 65 mph?

For automakers, the solution is to incorporate intuitive user interfaces that support glanceability and include large touch targets, and add in features of ASR, NLU and other technologies as advancements are made. As gesture and voice interaction techniques mature, cognitive loads will decrease, especially on the simpler, less urgent in-vehicle tasks, making them less distracting.

Until self-driving autos render the distracted driving problem a thing of the past, it's up to automakers to choose cockpit technology judiciously, and to rely on good UX interface design to get us safely home.

---------------------------------

References

 

1. National Highway Traffic Safety Administration, “Distracted Driving,” https://www.nhtsa.gov/risky-driving/distracted-driving

2. Parada-Loira et al., “Hand Gestures to Control Infotainment Equipment in Cars,” 2015

3. Insurance Institute for Highway Safety, Status Report, https://www.iihs.org/iihs/sr/statusreport/article/50/2/1

4. AAA, “Think You Know All About Distracted Driving? Think Again, Says AAA,” 2013, https://newsroom.aaa.com/2013/06/think-you-know-all-about-distracted-driving-think-again-says-aaa/

 

Jeff LeBlanc is Director of User Experience for both Boston UX and ICS. He heads the creative team. With an engineering degree from Worcester Polytechnic Institute (where he's also an adjunct professor), he's an expert at bridging the gap between design and development. What makes his day? Applying human factors principles to UX design. Oh, and 3D-printing a wearable Iron Man suit.

As a trained engineer and current director of user experience, I bring a wide variety of skills to any project I work on. I focus on solving problems in ways that address the needs of all stakeholders on a project, including business, technology and usability. As an experienced instructor and technical trainer, I focus on imparting knowledge to my students in ways that are both conducive to long-term learning and immediately useful in their daily lives. For fun, I’m an avid 3D-printing hobbyist.
