Bad computer! Even when humans are in control, they blame AI assistants for mistakes

MUNICH, Germany — Fully autonomous vehicles powered by AI systems can now navigate traffic without any human input. Of course, the vast majority of cars on the road today still have a human behind the wheel, and in many cases that driver relies on directions or navigation from an AI assistant. Now, researchers from Ludwig Maximilian University of Munich are revealing who people see as responsible when something goes right or wrong. When a driver gets lost following GPS directions, is it the fault of the human behind the wheel, or the AI system helping them navigate?

Overall, the results paint a complicated picture of how humans assign responsibility and blame while driving with an AI helper. Even though most people see AI-based assistants purely as tools, they still assign partial responsibility for decisions to the computer.

“We all have smart assistants in our pockets,” says study leader Dr. Louis Longin, from the Chair of Philosophy of Mind, in a media release. “Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver’s seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential.”

A philosopher specializing in interactions between humans and artificial intelligence, Dr. Longin collaborated with his colleague Dr. Bahador Bahrami and Prof. Ophelia Deroy, Chair of Philosophy of Mind, to conduct this study. Together, the research team analyzed how 940 participants judged a human driver using either an AI-powered verbal assistant, an AI-powered tactile assistant, or a non-AI navigation instrument. Participants were also asked if they considered the navigation aid responsible, and how much they saw it as a “tool.”

Driver using a GPS navigation app in the car (Photo by Ravi Palwe on Unsplash)

The results, researchers say, reveal a clear ambivalence toward these assistants. While participants strongly maintained that smart assistants were just tools, they also described them as partially responsible for both the successes and failures of the human drivers using them. No such division of responsibility emerged for the non-AI-powered navigation instrument.

Interestingly, study authors point out that smart assistants were actually seen as more responsible for positive outcomes than negative ones.

“People might apply different moral standards for praise and blame. When a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems,” suggests Dr. Bahrami, who is considered an expert on collective responsibility.

The research team discovered no difference between smart assistants that used language and those that alerted their users through a tactile vibration of the steering wheel.

“The two provided the same information in this case, ‘Hey, careful, something ahead,’ but of course, ChatGPT in practice gives much more information,” notes Ophelia Deroy, whose research examines humanity’s conflicting attitudes toward artificial intelligence as a form of animist beliefs.

Regarding the additional information provided by novel language-based AI systems like ChatGPT, Deroy notes, “the richer the interaction, the easier it is to anthropomorphize.”

“In sum, our findings support the idea that AI assistants are seen as something more than mere recommendation tools but remain nonetheless far from human standards,” Dr. Longin explains.

The study authors believe their work will shape both the design of AI assistants and the social discourse surrounding them.

“Organizations that develop and release smart assistants should think about how social and moral norms are affected,” Dr. Longin concludes.

The study is published in the journal iScience.

About the Author

John Anderer

Born blue in the face, John has been writing professionally for over a decade and covering the latest scientific research for StudyFinds since 2019. His work has been featured by Business Insider, Eat This Not That!, MSN, Ladders, and Yahoo!

Studies and abstracts can be confusing and awkwardly worded. He prides himself on making such content easy to read, understand, and apply to one’s everyday life.
