ChatGPT on smartphone

(Photo by Tada Images on Shutterstock)

SHEFFIELD, United Kingdom — AI systems, including the increasingly popular ChatGPT, could assist hackers launch cyberattacks on computer networks, a new study warns.

Researchers from the University of Sheffield’s Department of Computer Science identified how Text-to-SQL systems – AIs designed to search databases using plain language queries – can be exploited in real-world cyber crimes. These findings shed light on how AI systems might be manipulated to access sensitive information, tamper with databases, or initiate Denial-of-Service attacks.

Out of the six commercial AI tools evaluated – ChatGPT, BAIDU-UNIT, AI2SQL, AIHELPERBOT, Text2SQL, and ToolSKE – all were found to have security vulnerabilities. By asking these platforms specific questions, researchers could get them to generate malicious code. When executed, this code could disrupt database services, leak confidential data, or even destroy the database. For instance, on Baidu-UNIT, a Chinese dialogue customization app, the team obtained confidential server configurations and even took a server node offline.

“At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services,” says Xutan Peng, a Ph.D. student at the University of Sheffield and co-leader of the research, in a university release.

Person Looking At Computer Screen Hacked By Ransomware Hacker
(© Andrey Popov – stock.adobe.com)

One key concern raised by the study is the use of AI tools like ChatGPT for productivity.

“For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records. As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning,” explains Peng.

The study also reveals potential backdoor attacks on Text-to-SQL models, like embedding a “Trojan Horse” by contaminating training data.

“Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work. Large language models, like those used in Text-to-SQL systems, are extremely powerful, but their behavior is complex and can be difficult to predict,” Dr. Mark Stevenson from the University of Sheffield cautions.

Recognizing the importance of the study, Baidu, a Chinese platform, deemed the vulnerabilities as critically hazardous. Following the findings, Baidu addressed the issues and compensated the research team for their pioneering efforts.

The researchers stress the need for a collaborative approach to cybersecurity, urging scientists to work collectively in open-source communities to mitigate risks.

“There will always be more advanced strategies being developed by attackers, which means security strategies must keep pace. To do so we need a new community to fight these next-generation attacks,” concludes Peng.

The study was presented at the International Symposium on Software Reliability Engineering (ISSRE) in Florence, Italy.

You might also be interested in:

South West News Service writer James Gamble contributed to this report.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We do not agree nor disagree with any of the studies we post, rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

Chris Melore

Editor

Sophia Naughton

Associate Editor