Study: Artificial intelligence ‘learns’ human bias, stereotypes

PRINCETON, N.J. — Despite preconceived notions that artificial intelligence systems are purely logical and objective, new research shows that machines can hold the same biases that humans do.

The study, conducted at Princeton University, demonstrated that machine-learning systems can develop a wide range of cultural biases by “observing” patterns embedded in billions of words of human language posted online.

A new study finds that artificial intelligence can learn human biases and stereotypes simply by observing various nuances of human language.

Solving this problem is important as society increasingly turns to computers to process human communication, including online text searches, image categorization, and automated language translation.

“Questions about fairness and bias in machine learning are tremendously important for our society,” says Arvind Narayanan, an assistant professor of computer science at Princeton, in a university press release. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The researchers ran an experiment using a program called “GloVe,” which functioned as a machine-learning version of the Implicit Association Test, an exam that measures how quickly people pair word concepts shown on a computer screen. Previous research has shown that response times are significantly shorter when people pair concepts they perceive as related than when they pair concepts they do not.

When the scientists had GloVe analyze more than 840 billion words from an assortment of online content, they found that it replicated biases documented by the Implicit Association Test, such as pairing female names with words related to family and male names with words related to careers.
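To illustrate the general idea (this is a simplified sketch, not the researchers’ actual code): programs like GloVe store each word as a list of numbers, and words that appear in similar contexts end up with similar numbers. A bias can then be estimated by checking whether, say, a female pronoun sits numerically closer to “family” than to “career.” The tiny three-number vectors below are invented purely for illustration; a real test would load GloVe vectors trained on web text.

import numpy as np

# Toy word vectors, invented only for illustration. Real GloVe vectors have
# hundreds of dimensions and are learned from billions of words of web text.
vectors = {
    "she":    np.array([0.9, 0.1, 0.2]),
    "he":     np.array([0.1, 0.9, 0.2]),
    "family": np.array([0.8, 0.2, 0.3]),
    "career": np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: close to 1 when two words occur in similar contexts.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    # Positive if `word` sits closer to attr_a than to attr_b in vector space.
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

for word in ("she", "he"):
    print(word, "family-vs-career score:", round(association(word, "family", "career"), 3))

With these made-up numbers, “she” gets a positive score and “he” a negative one, mirroring the kind of gender-career association the study measured in real embeddings.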

The program also picked up racial biases, associating African-American names with unpleasant concepts and European-American names with more pleasant ones. Even seemingly innocuous associations, such as linking flowers with pleasant words and insects with unpleasant ones, were identified.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” says Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

The study’s findings were published in April in the journal Science.
