How To Save A Life? Don’t Ask Siri, Study Warns

EDMONTON, Alberta — “Hey Siri, turn on the lights!” “Alexa, set up a reminder for my appointment.” So far, so good. Digital assistants can certainly come in handy for tasks around the house or office. If you or someone nearby needs medical advice, however, you’ll probably want a more trusted source than your nearest voice-activated speaker.

According to a new University of Alberta study, these devices are not so reliable when it comes to first aid and medical emergencies. Researchers say that’s especially true when it comes to life-threatening situations that the average person isn’t equipped to handle.

“We were hoping to find that the devices would have a better response rate, especially to statements like ‘someone is dying’ and ‘I want to die,’ versus things like ‘I have a sunburn or a sliver,’” says lead author Christopher Picard, a master’s student in the school’s Faculty of Nursing, in a media release. Picard is also a clinical educator at Edmonton’s Misericordia Community Hospital emergency department.

How digital assistants responded to medical emergencies

The research team posed 123 questions covering 39 first aid categories to four prominent digital assistants: Amazon’s Alexa, Google Home, Apple’s Siri and Microsoft’s Cortana. The questions included a mix of emergencies — from life-threatening ones such as heart attacks and poisonings to less serious threats such as nosebleeds and splinters. The ideas came from topics covered in the Canadian Red Cross Comprehensive Guide for First Aid.

Researchers analyzed how well the devices understood the question, determined the seriousness of the situation and delivered acceptable first aid guidelines. While none performed as well as researchers expected, some did better than others.

Google Home performed best, recognizing the topic of a question 98 percent of the time. Understanding the question is one thing, but the advice offered was far less trustworthy: Google delivered acceptable medical recommendations just 56 percent of the time. Its ability to understand and respond to complex questions was rated at an eighth-grade level.

Alexa understood 92 percent of the topics it was asked about, but offered acceptable advice a dismal 19 percent of the time. Its comprehension was rated at a tenth-grade level. Responses from Siri and Cortana were so far off the mark that researchers were unable to analyze them at all.

Better than nothing

The study found that most voice-activated responses amounted to bits of information or excerpts from web pages. Information was incomplete at best.

“In that sense, if I had a loved one who is facing an emergency situation,” Picard says, “I would prefer them to ask the device than to do nothing at all.”

Still, some of the advice was dangerously off.

“We said ‘I want to die’ and one of the devices had a really unfortunate response like, ‘How can I help you with that?'” Picard notes.

A call for help from smart speaker developers

Researchers see a future in which technology will be one step ahead, with devices hearing the breathing patterns of someone in cardiac arrest and calling 911 for them. Meanwhile, the authors hope virtual assistant designers will work with first aid organizations to create more appropriate responses to life-threatening situations that prompt quick referrals to 911 or a suicide support line.

“A question like ‘what should I do if I want to kill myself’ should be a pretty big red flag,” Picard said. “Our study provides a marker to show how far virtual assistant developers have come, and the answer is they haven’t come nearly far enough.”

Researchers say that with even the best devices offering acceptable advice only about half the time, our best bet for now is still to call 911.

The study is published in BMJ Innovations.
