Is artificial intelligence more trustworthy than humans when it comes to personal info?


UNIVERSITY PARK, Pa. — Could artificial intelligence be the key to limiting privacy issues and identity theft for companies in the future? A study by researchers at Penn State University found that people are more likely to entrust their personal information to machines than to other humans.

The findings seem to fly in the face of people’s general distrust of computers and artificial intelligence.

The study found that, among its 160 participants, those who trusted machines were much more likely to give their credit card numbers to a computerized travel agent than to a human travel agent. According to study author S. Shyam Sundar, co-director of the Media Effects Research Laboratory and professor of media effects at Penn State, this bias toward viewing machines as more trustworthy and secure than people, known as the machine heuristic, could be behind the findings.

“This tendency to trust the machine agent more than the human agent was much stronger for people who were high on the belief in the machine heuristic,” Sundar explains in a university release. “For people who did not believe in the machine heuristic, it didn’t make a difference whether the travel agent was a machine or a human.”

This suggests that the presence of a machine agent in the online travel interface served as a cue, activating the belief that machines are better at protecting information than humans. The researchers hypothesized that the faith in machines revealed by Sundar’s study stems from the assumption that machines don’t gossip and have no hidden plans for the personal information they collect.

But Sundar is quick to warn that, while machines themselves have no ulterior motives, the people behind such artificial intelligence may still have access to that personal information, or the opportunity to extract it from unwitting users.

“This study should serve as a warning for people to be aware of how they interact online,” says Sundar. “People should be aware that they may have a blind belief in machine superiority. They should watch themselves when they engage online with robotic interfaces.”

The study was presented at the ACM CHI Conference on Human Factors in Computing Systems in May 2019.
