Hate speech on Twitter predicts actual hate crimes, study finds

NEW YORK — Can hate speech online lead to real-life violence? Censorship on social media is a hotly contested topic these days, but a new study out of New York University may shed some light on the real-life consequences of online racism. Researchers say that online hate speech on the popular social media platform Twitter does in fact predict real-life racial violence.

The study’s authors used artificial intelligence to find relationships between online hate speech and offline violence against minorities in 100 cities. They found that cities with a higher rate of direct, targeted racist tweets reported more real hate crimes related to race, ethnicity, and national origin.

Researchers analyzed the location and linguistic features of 532 million tweets posted between 2011 and 2016. To accomplish this, they trained an AI system to identify and analyze two types of racially charged tweets: those that directly expressed hateful views, and self-narrative tweets, which describe or comment on another person's hateful remark or act. Researchers then compared the prevalence of each type of discriminatory tweet with the number of hate crimes reported during the same period in the same cities.
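The city-level comparison described above can be sketched as a simple correlation between per-city counts. The sketch below is purely illustrative — the numbers are made up, not the study's data, and the study itself used a more sophisticated model:

```python
import numpy as np

# Hypothetical per-city counts (illustrative only; not the study's figures):
# targeted discriminatory tweets and reported race-based hate crimes.
targeted_tweets = np.array([120, 45, 300, 80, 210, 15, 95, 160])
hate_crimes = np.array([14, 6, 31, 10, 22, 2, 11, 18])

# Pearson correlation between the two city-level counts; a positive value
# mirrors the study's finding that more targeted tweets track more hate crimes.
r = np.corrcoef(targeted_tweets, hate_crimes)[0, 1]
print(round(r, 3))
```

The same calculation with self-narrative tweet counts would, per the study's findings, yield a negative correlation instead.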

While cities experiencing more targeted racial tweets saw more real-life hate incidents, researchers say there was actually a negative relationship between the proportion of self-narrative tweets and acts of racial violence.

“We found that more targeted, discriminatory tweets posted in a city related to a higher number of hate crimes,” explains co-author Rumi Chunara in a release. “This trend across different types of cities (for example, urban, rural, large, and small) confirms the need to more specifically study how different types of discriminatory speech online may contribute to consequences in the physical world.”

A wide range of cities was analyzed, accounting for varying levels of urbanization, social media usage, and population diversity. The researchers limited their analysis to tweets and hate crimes describing or motivated by racial, ethnic, or national-origin discrimination.

Additionally, the study's authors identified certain discriminatory words and phrases used more prevalently in particular cities and regions of the country. These findings may be useful in the future for identifying specific racial groups at greater risk of a hate crime in certain areas.

It is also worth noting that 8% of the hateful tweets analyzed were found to have been generated by bots, an increasingly common problem on social media platforms.

Ultimately, the study’s authors say that additional research is necessary to fully understand the causal relationship between hateful sentiments shared online and real-life violence.

The study was presented at the Association for the Advancement of Artificial Intelligence Conference on Web and Social Media in Munich, Germany.
