COPENHAGEN, Denmark — There was a time when it was relatively easy to spot fake profiles on social media platforms like Facebook or Twitter. Images were usually pirated from elsewhere and easily traced, and the posts were almost always poorly written in a robotic fashion. Fast forward to today, however, and the AI boom of 2023 has blurred the line between real social media accounts and bots, making them more difficult to tell apart than ever before.
Scientists from Copenhagen Business School conducted an experiment in which 375 participants were asked to differentiate between real and fake social media profiles, with the fake accounts created and maintained by some of the latest AI technology. The results were eye-opening: subjects largely could not tell the difference between artificially generated fake Twitter accounts and real ones. In fact, participants tended to rate the AI-generated accounts as less likely to be fake than the genuine ones.
The research team set up their own mock Twitter feed focused specifically on the war in Ukraine. The feed featured both real and generated profiles, with tweets supporting both sides of the debate surrounding the conflict. The AI profiles used synthetic profile pictures generated with StyleGAN, while their posts were written by GPT-3, the language model family behind ChatGPT.
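The paper does not include the team's generation code, but a minimal sketch helps show how low the technical barrier has become. The example below uses OpenAI's legacy pre-1.0 Python Completions API, which was current in the GPT-3 era; the prompt wording, model name, and sampling parameters here are illustrative assumptions, not the researchers' actual configuration.

```python
# Illustrative sketch only: generating a single synthetic "tweet"
# with a GPT-3-era model via OpenAI's legacy Completions API
# (openai Python package < 1.0). Not the study's actual pipeline.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Hypothetical prompt in the spirit of the experiment's Ukraine-war feed
prompt = (
    "Write a short, informal tweet from an ordinary social media user "
    "sharing an opinion about the war in Ukraine."
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3-family completion model
    prompt=prompt,
    max_tokens=60,             # roughly tweet-length output
    temperature=0.9,           # higher temperature for varied, human-sounding text
)

print(response.choices[0].text.strip())
```

Paired with a StyleGAN-generated face for the avatar, a loop over prompts like this would be enough to populate an entire feed of plausible profiles, which is precisely the accessibility the researchers warn about.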
“Interestingly, the most divisive accounts on questions of accuracy and likelihood belonged to the genuine humans. One of the real profiles was mislabeled as fake by 41.5% of the participants who saw it. Meanwhile, one of the best-performing fake profiles was only labeled as a bot by 10%,” says Sippo Rossi, a PhD Fellow at the Centre for Business Data Analytics in the Department of Digitalization at Copenhagen Business School, in a university release.
“Our findings suggest that the technology for generating fake profiles has advanced to such a point that it is difficult to distinguish them from real profiles,” he adds.
“Previously, it was a lot of work to create realistic fake profiles. Five years ago, the average user did not have the technology to create fake profiles at this scale and with this ease. Today it is very accessible and available to the many, not just the few,” explains study co-author Raghava Rao Mukkamala, Director of the Centre for Business Data Analytics at the Department of Digitalization at Copenhagen Business School.

The rapid advancement of AI and deep learning holds major implications for all of society, but the potential malicious uses of these technologies on social media are especially concerning. From political manipulation and misinformation to cyberbullying and cybercrime, it isn’t an exaggeration to say bad actors may soon have some seriously powerful AI tools at their disposal. That is, if they don’t already.
“Authoritarian governments are flooding social media with seemingly supportive people to manipulate information, so it’s essential to consider the potential consequences of these technologies carefully and work towards mitigating their negative impacts,” Raghava Rao Mukkamala adds.
The study authors used a simplified setting for this project: participants were shown a single tweet along with the profile information of the account that posted it. Moving forward, future studies should examine whether bots can be correctly identified within a news feed discussion in which both fake and real profiles comment on the same news item in one thread.
“We need new ways and new methods to deal with this, as putting the genie back in the lamp is now virtually impossible. If humans are unable to detect fake profiles and posts and to report them, then detection will have to be automated, through measures such as removing accounts, ID verification, and the development of other safeguards by the companies operating these social networking sites,” Sippo Rossi continues.
“Right now my advice would be to only trust people on social media that you know,” Rossi concludes.
The study was published in the Proceedings of the Hawaii International Conference on System Sciences (HICSS).