January 28, 2020

We get dozens of digital notifications every day. From text messages to calendar reminders, privacy permission requests to software updates, our reactions are near-automatic each time: dismiss, snooze or remind me later. Why do we let these important notifications pass us by?

“Our brains are wired to tune things out over time,” says Anthony Vance, director of the Center for Cybersecurity at the Fox School. It often has to do with memory. “We saw it last time, so we don’t have to scrutinize it so much this time,” he explains. “Sometimes we remember something more than we actually see it.” 

Unwilling to Update 

Vance, who studies cybersecurity as an associate professor in the Management Information Systems Department, is concerned with how quickly people are willing to swipe away important security notifications. Unfortunately, it’s a habitual behavior we all learn as soon as we are old enough to use a computer or tablet. For example, pop-ups explaining a new software update are not usually intrusive enough to stop us from finishing our email. 

“The thing is, software updates fix security vulnerabilities that hackers know about and can take advantage of,” says Vance. “As soon as Apple or Microsoft publishes these security updates, the whole world knows what needs fixing. Hackers start writing attacks to take advantage of these holes.” That means the longer we wait to update our computers or phones, the more susceptible our devices are to hacking. 

Changing What We See

To investigate how to stop us from ignoring important updates, Vance and his colleagues experimented with changes in the design of security warnings. They added visuals like a triangular yellow “warning” symbol, a red background, a “jiggle” animation when the warning appears and a dynamic zoom that made the warning increase in size. 

In the first part of their experiment, Vance tracked users’ reactions to the varied notification designs through fMRI and eye tracking. Their work was unique in that they tracked the changes in these reactions over the course of five days, while most fMRI experiments only capture a single session. The design changes seemed effective. “Those treatments sustained attention across that whole week,” explains Vance, meaning that the users were less likely to ignore those warnings, even when repeatedly exposed to them. 

The researchers followed up this lab experiment with a field study. Over 100 participants were recruited to evaluate apps on, unbeknownst to them, a fake Android app store. “These were people using their mobile devices in everyday life,” says Vance. “We measured their actual behavior. Out of a list of ten apps, they were asked to download, install and evaluate three.” The researchers randomized the permission warnings—as well as the visual displays—on each app. The warnings ranged from the innocuous, like connecting to the internet, to the outrageous, like “record microphone audio at any time.” The participants needed to decide whether or not to risk downloading the app.  
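The randomization the article describes can be sketched in a few lines: each participant evaluates three of ten apps, and each app’s permission warning is randomized in both content and visual design. This is an illustrative reconstruction only; all names, permission strings, and treatment labels below are assumptions drawn loosely from the article, not from the study’s actual materials.

```python
import random

# Illustrative sketch of the field study's randomization, based on the
# article's description. All identifiers and specific values here are
# hypothetical assumptions, not the researchers' actual design.

PERMISSIONS = [
    "connect to the internet",              # innocuous end of the range
    "record microphone audio at any time",  # outrageous end of the range
]
TREATMENTS = ["static", "jiggle", "zoom", "red background"]  # visual designs

def assign_condition(apps, n_to_evaluate=3, rng=random):
    """Pick which apps a participant evaluates, and randomize each
    app's permission warning and visual treatment independently."""
    chosen = rng.sample(apps, n_to_evaluate)  # 3 distinct apps from the list
    return [
        {
            "app": app,
            "permission": rng.choice(PERMISSIONS),
            "treatment": rng.choice(TREATMENTS),
        }
        for app in chosen
    ]

apps = [f"app_{i}" for i in range(1, 11)]  # a list of ten apps
condition = assign_condition(apps)
```

Randomizing the warning content and the visual treatment separately is what lets the researchers attribute differences in download decisions to the design itself rather than to which permissions happened to appear.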

“The people who saw the variations in warning designs had more secure behavior over time,” says Vance. “These designs are more resistant to habituation, which is tuning things out.” By the end of the three-week study, nearly 80 percent of those who saw messages that changed their appearance with dynamic animations were still adhering to safe security behavior, compared to only 55 percent of those who saw static warnings. 

Vance and his research team shared their findings in their paper, “Tuning Out Security Warnings: A Longitudinal Examination of Habituation Through fMRI, Eye Tracking, and Field Experiments,” published in MIS Quarterly last summer. 

Designing for Better Behaviors 

Vance says that we should not be too hard on ourselves for failing to notice important warnings: “It’s not entirely our fault. Our brains are working against us to make good security decisions.” 

However, this research demonstrates there is a clear opportunity to change behavior. “Employers and designers of software need to be aware of this and design their systems so that they work with the way the brain works and not against it,” says Vance. Based on this research, he suggests using innovative and novel designs to ensure that users are taking note of important notifications—like asking people to use unique swiping patterns or smash virtual glass with a hammer. 

Vance and his team received a grant from the National Science Foundation to continue this research; the next step will be understanding how habituation to messages generalizes across platforms. For example, your phone gets notifications all the time, and you learn the automatic response. “But when you get a rare security message, even if it’s on a different platform, we respond similarly with the same instinctive ‘dismiss,’” explains Vance. “Our question is: With all these notifications that we’re barraged with, do they make us less able to respond to a rare security message that actually matters?” 

As more systems become automated and intelligent devices more prevalent, these human decisions—what to ignore and what to act on—become more important. “The danger is that this can desensitize us to things that really matter,” Vance warns. With more research, Vance hopes we can move away from automatic reactions to notifications and become more conscious of how we protect our cybersecurity.