Should computers be less perfect? Scientists adding human doubt to artificial intelligence

CAMBRIDGE, United Kingdom — Is AI just a little too perfect for its own good? Human error and indecision are inherent in our daily choices, yet many artificial intelligence systems struggle to factor in these very human traits. Recognizing the imperfection of this perfection, a team of researchers from the University of Cambridge is pioneering ways to embed this human sense of uncertainty into AI programs.

Many AI systems, especially those that receive feedback from humans, operate under the presumption that humans are always accurate and certain in their decisions. Real-life decisions, though, are often dotted with mistakes and doubt.

Researchers are now exploring ways to reconcile human behavior with machine learning. Their work aims to bolster trustworthiness and diminish risks in AI-human interfaces, particularly in high-stakes domains like medical diagnostics.

By modifying an established image classification dataset, researchers enabled human participants to specify their uncertainty levels while labeling images. Their findings revealed that while systems trained with these uncertain labels improved in handling doubtful feedback, the inclusion of human feedback sometimes caused the performance of these AI-human systems to dip.
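For readers curious what "uncertain labels" might look like in practice, here is a minimal sketch (not the team's actual code) of one common approach: converting an annotator's self-reported confidence into a soft target distribution and training against it with cross-entropy. The confidence value, class choice, and model output below are all hypothetical, and the soft-label scheme is an assumption, offered only to make the idea concrete.

```python
import torch
import torch.nn.functional as F

def soft_target(chosen_class: int, confidence: float, num_classes: int) -> torch.Tensor:
    """Turn one uncertain choice into a soft label: the annotator's stated
    confidence goes to the chosen class, and the remaining probability mass
    is spread evenly over the other classes."""
    target = torch.full((num_classes,), (1.0 - confidence) / (num_classes - 1))
    target[chosen_class] = confidence
    return target

# Hypothetical annotation: a labeler picks class 3 but is only 70% sure.
num_classes = 10
target = soft_target(chosen_class=3, confidence=0.7, num_classes=num_classes)

# Hypothetical model output (logits) for one image.
logits = torch.randn(1, num_classes)

# Cross-entropy against the soft target instead of a hard one-hot label.
loss = -(target * F.log_softmax(logits, dim=-1)).sum()
print(f"soft-label cross-entropy: {loss.item():.3f}")
```

In a setup like this, a fully confident label reduces to the usual one-hot target, while a hesitant one spreads probability across the alternatives, so the model is penalized less for disagreeing with a doubtful annotator.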

The AI arena views “human-in-the-loop” machine learning systems, which allow human feedback, as a way to mitigate risks in areas where automated systems alone might be inadequate. But how do these systems react when their human collaborators express doubt?

“Uncertainty is central in how humans reason about the world but many AI models fail to take this into account,” says Katherine Collins, the study’s first author from Cambridge’s Department of Engineering, in a university release. “A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.”


In scenarios where errors have minimal consequences, such as mistaking a stranger for a friend, human uncertainty is inconsequential. However, in safety-sensitive applications, human uncertainty can be perilous.

“Many human-AI systems assume that humans are always certain of their decisions, which isn’t how humans work – we all make mistakes,” says Collins. “We wanted to look at what happens when people express uncertainty, which is especially important in safety-critical settings, like a clinician working with a medical AI system.”

“We need better tools to recalibrate these models, so that the people working with them are empowered to say when they’re uncertain,” adds study co-author Matthew Barker, who recently completed his Master of Engineering degree at Gonville and Caius College, Cambridge. “Although machines can be trained with complete confidence, humans often can’t provide this, and machine learning models struggle with that uncertainty.”

Using different datasets, including one that involved human participants distinguishing bird colors, the research team explored how uncertainty affects final results. They noted that replacing machine decisions with human ones often led to a sharp decline in performance.
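As a rough, purely illustrative way to see why deferring to an uncertain human can hurt, the snippet below simulates a hybrid system that hands a share of decisions from a more accurate (hypothetical) model to a less accurate (hypothetical) human and compares the resulting accuracies; the numbers are invented and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
num_classes = 10

# Hypothetical ground truth and two decision sources: a model that is
# right roughly 92% of the time and a hesitant human at roughly 80%.
truth = rng.integers(0, num_classes, size=n)
model_pred = np.where(rng.random(n) < 0.92, truth, rng.integers(0, num_classes, size=n))
human_pred = np.where(rng.random(n) < 0.80, truth, rng.integers(0, num_classes, size=n))

# Hybrid system: defer 30% of the decisions to the human.
defer = rng.random(n) < 0.30
hybrid_pred = np.where(defer, human_pred, model_pred)

for name, pred in [("model alone", model_pred), ("human alone", human_pred), ("hybrid", hybrid_pred)]:
    print(f"{name:12s} accuracy: {(pred == truth).mean():.3f}")
```

Because the human's error rate is higher in this toy setup, every decision deferred to them drags the hybrid accuracy below the model-only baseline, mirroring the kind of drop the Cambridge team reported when human decisions replaced machine ones.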

“We know from decades of behavioral research that humans are almost never 100 percent certain, but it’s a challenge to incorporate this into machine learning,” says Barker. “We’re trying to bridge the two fields, so that machine learning can start to deal with human uncertainty where humans are part of the system.”

The research revealed multiple challenges in blending humans into machine learning systems. To drive further exploration, the team is making its datasets publicly available, hoping that future work can incorporate human uncertainty more effectively into AI models.

“As some of our colleagues so brilliantly put it, uncertainty is a form of transparency, and that’s hugely important,” explains Collins. “We need to figure out when we can trust a model and when to trust a human and why. In certain applications, we’re looking at a probability over possibilities. Especially with the rise of chatbots for example, we need models that better incorporate the language of possibility, which may lead to a more natural, safe experience.”

The study was presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2023) in Montréal.


