COLUMBIA, Mo. — As much as we disavow bigots and trolls on social media, we don’t believe their soapbox should be seized entirely or their hateful speech censored, a new study finds.

Researchers at the University of Missouri conducted a series of focus groups, hoping to learn what different demographics thought about the place of “extreme speech” — crude, sexist, violent, and racist language — in social media discourse.

A new study finds that while most social media users are against hateful speech, very few would approve of sites censoring inflammatory language and expressions of hate.

To foster a comfortable and unencumbered discussion, the researchers divided participants — all regular social media users — into one of four groups based on their gender and race: African-American men, African-American women, white men, and white women. The researchers chose this approach so that participants could share honest opinions openly without fear of offending someone of another race or gender.

Most participants, regardless of background, felt that speech shared on social media should either have a clear purpose behind it, or serve as a means of expression.

In addition, participants expressed dislike toward offensive language on social platforms, emphasizing that sites need to be more transparent about how they manage and promote certain content, along with how they prevent vulnerable users from being targeted.

These sentiments were particularly strong among the women interviewed.

However, despite their strong views, few participants actually believed that popular social media sites took an active approach to censoring inflammatory speech.

Although many public figures have called for the curbing of such rhetoric, the groups’ participants made nothing close to an unequivocal demand for a ban on hate speech, the researchers noted.

“While the focus groups did not reveal an outright demand to censor extreme and offensive speech, we found a prevailing trend of participants calling for social networking sites to have clear and transparent policies related to permissible content,” explains lead researcher Brett Johnson in a university press release. “Sites should communicate their policies clearly to users and frame speech policies as a means of promoting healthy public discourse rather than pledging to keep users safe from harmful speech.”

Since many social media sites have been flummoxed by an influx of controversial content, including fake news, Johnson hopes that his team’s research can help demystify how to approach extreme speech.

The study’s findings will be published in the journal Internet Research later this year.

About Daniel Steingold
