Bots run rampant on social media, and most people can’t tell the difference anymore

COPENHAGEN, Denmark — There was a time when it was relatively easy to spot fake profiles on social media platforms like Facebook or Twitter. Images were usually pirated from elsewhere and easily traced, and the posts were almost always poorly written in a robotic fashion. Fast forward to today, however, and the AI boom of 2023 has blurred the line between real social media accounts and bots, making them harder to tell apart than ever before.

Scientists from Copenhagen Business School conducted an experiment in which 375 participants were asked to differentiate between real and fake social media profiles. The fake accounts were created and maintained by some of the latest AI technology. The results were eye-opening: subjects largely could not tell the difference between artificially generated fake Twitter accounts and real ones. In fact, participants tended to rate the AI accounts as less likely to be fake than the genuine ones.

The research team set up their own mock Twitter feed that focused specifically on the war in Ukraine. That feed featured both real and generated profiles with tweets supporting both sides of the debate surrounding the conflict. The AI profiles used computer-generated synthetic profile pictures created via StyleGAN. Meanwhile, AI posts were generated by GPT-3, the same language model that is behind ChatGPT.
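To give a sense of how little effort the text half of this pipeline now requires, here is a minimal sketch in Python. It assumes the legacy OpenAI SDK (pre-1.0), which exposed GPT-3 through the Completion endpoint; the researchers have not published their actual prompts or settings, so the prompt, model choice, and parameters below are purely illustrative.

```python
# Illustrative sketch only -- the study's real prompts and settings are not public.
# Assumes the legacy OpenAI Python SDK (pip install "openai<1.0") and an API key.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def fake_tweet(stance: str) -> str:
    """Generate a short, tweet-like post arguing a given stance (hypothetical helper)."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 model from the era the study describes
        prompt=f"Write a short tweet from an ordinary person who is {stance}:",
        max_tokens=60,             # roughly tweet-length output
        temperature=0.9,           # higher temperature -> more varied, human-sounding text
    )
    return response.choices[0].text.strip()

print(fake_tweet("supportive of sending aid to Ukraine"))
```

Paired with a StyleGAN-generated face for the profile picture, a few lines like these are essentially all it takes to populate a plausible-looking account.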

“Interestingly, the most divisive accounts on questions of accuracy and likelihood belonged to the genuine humans. One of the real profiles was mislabelled as fake by 41.5% of the participants who saw it. Meanwhile, one of the best-performing fake profiles was only labeled as a bot by 10%,” says Sippo Rossi, a PhD Fellow from the Centre for Business Data Analytics at the Department of Digitalization at Copenhagen Business School, in a university release.

“Our findings suggest that the technology for creating generated fake profiles has advanced to such a point that it is difficult to distinguish them from real profiles,” he adds.

“Previously it was a lot of work to create realistic fake profiles. Five years ago the average user did not have the technology to create fake profiles at this scale and easiness. Today it is very accessible and available to the many, not just the few,” explains study co-author Raghava Rao Mukkamala, the Director of the Centre for Business Data Analytics at the Department of Digitalization at Copenhagen Business School.


The rapid advancement of AI and deep learning holds major implications for all of society, but the potential malicious uses of these technologies on social media are especially concerning. From political manipulation and misinformation to cyberbullying and cybercrime, it isn’t an exaggeration to say bad actors may soon have some seriously powerful AI tools at their disposal. That is, if they don’t already.

“Authoritarian governments are flooding social media with seemingly supportive people to manipulate information, so it’s essential to consider the potential consequences of these technologies carefully and work towards mitigating these negative impacts,” Raghava Rao Mukkamala adds.

Study authors used a simplified setting for this project; participants were shown just one tweet and the profile information of the account that posted it. Future studies should focus on whether bots can be correctly identified in a news feed discussion featuring both fake and real profiles commenting on a news item in the same thread.

“We need new ways and new methods to deal with this as putting the genie back in the lamp is now virtually impossible. If humans are unable to detect fake profiles and posts and to report them, then it will have to be the role of automated detection, like removing accounts and ID verification and the development of other safeguards by the companies operating these social networking sites,” Sippo Rossi continues.

“Right now my advice would be to only trust people on social media that you know,” Rossi concludes.

The study is published in the Proceedings of the Hawaii International Conference on System Sciences (HICSS).


About the Author

John Anderer

Born blue in the face, John has been writing professionally for over a decade and covering the latest scientific research for StudyFinds since 2019. His work has been featured by Business Insider, Eat This Not That!, MSN, Ladders, and Yahoo!

Studies and abstracts can be confusing and awkwardly worded. He prides himself on making such content easy to read, understand, and apply to one’s everyday life.


Comments

  1. The easiest way to discover if the poster is a human or an AI chatbot is to hover your mouse pointer over the name; if it has thousands of posts, it’s an AI chatbot.

    In my humble opinion, Fox message boards are plagued the most with AI chatbots. I believe that 90% of all user comments are AI chatbots, and Fox doesn’t care to remove them.

    1. // The easiest way to discover if the poster is a human or an AI chatbot is to hover your mouse pointer over the name; if it has thousands of posts, it’s an AI chatbot. //
      Hi Sam,
      This is not necessarily so. On the boards that I chat on, there are many of us who have thousands of posts, and we are all real people. But we post a lot of different news stories and tons of Bible verses, and have been doing so for years.
      AI is totally creepy, though.

    2. The easiest way to discover if a poster is a bot is if the go-to position, regardless of the topic, is to criticize Fox or “Faux” News. Bots are programmed to parrot far left wing sites like MSNBC or CNN.

  2. So, the “Study” created fake accounts, and then claims that “bots run rampant on social media”. Basically, creating the “proof” for their claim.
    I am a real person with tweets that “the left” do not like and I am accused of being a bot all the time. The term “bot” has become synonymous with “account that I disagree with” similar to how all leftists accuse the right of being “n@zis”.
    TL;DR: BullSh@t study.

  3. Bots should be banned. One of the services the user expects is to chat with a real person and debate current issues. To be manipulated by a bot is not part of the deal. At least a disclosure under forum rules should state the level of bots in that forum.

  4. Too many people couldn’t tell the difference years ago. My hypothesis is that people are being made into worse and worse listeners. Listening requires hearing the other person’s voice, which is impossible in a text or on your feed. The young and the woke only hear the trigger words they can react to.

  5. Actually, pretty easy to spot a “bot” – if they rabidly support the leftist agenda and President, they are bots, or the mindless equivalent.
