ChatGPT makes it easier to plagiarize in schools, study warns

PLYMOUTH, United Kingdom — The rollout of the AI-powered chatbot ChatGPT has brought both hope and fear to multiple industries. The advanced chatbot lets businesses streamline certain tasks, such as composing emails, supporting brainstorming sessions, and delivering 24/7 customer support to website visitors. On the other hand, researchers argue that ChatGPT makes it easier to plagiarize work and that it can generate false and discriminatory answers based on biases found on the internet.

ChatGPT can assist with writing but cannot replicate the value of human authorship. In the academic community, where writing papers is a huge part of the job, a new study finds that ChatGPT creates fresh problems for scientists who use it in their research.

“With any new revolutionary technology—and this is a revolutionary technology—there will be winners and losers,” says senior author Reuben Shipway, a lecturer in marine biology at the University of Plymouth, in a university release. “The losers will be those that fail to adapt to a rapidly changing landscape. The winners will take a pragmatic approach and leverage this technology to their advantage.”

ChatGPT launched in November 2022

While other chatbots exist, ChatGPT is the most advanced one currently on the market. Although it could potentially revolutionize research and education, the authors warn that its formulaic output is raising concerns about plagiarism and academic honesty.

“This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills—but looking positively it is an opportunity for us to rethink what we want students to learn and why,” says study co-author Debby Cotton, director of academic practice and professor of higher education at Plymouth Marjon University. “I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students.”

ChatGPT on smartphone
(Photo by Tada Images on Shutterstock)

The researchers used ChatGPT to write a paper, prompting the bot to create academic-style responses. These included writing an original academic paper with references; describing the implications of using an AI called GPT-3 for assessments in higher education; and producing several creative titles for an academic research paper on the plagiarism claims universities may face when using ChatGPT. They also posed several questions to help the AI shape the article’s focus, including how academics can prevent students from plagiarizing with GPT-3 and whether any technologies could detect if a chatbot created the paper.

The AI provided several responses, which the study authors copied and pasted into a manuscript. They ordered the content according to the structure suggested by ChatGPT and then added further references throughout the article.

Is banning ChatGPT the answer?

Readers of the paper would not discover that ChatGPT helped write the article until the paper’s discussion section, which the researchers wrote without any of the software’s suggestions. There, the authors note that ChatGPT produces more sophisticated sentences than other chatbots and AI tools. However, much of its output was formulaic, making it easy for AI-detection tools to determine whether a person or a piece of software wrote the paper.

“Banning ChatGPT, as was done within New York schools, can only be a short-term solution while we think how to address the issues. AI is already widely accessible to students outside their institutions, and companies like Microsoft and Google are rapidly incorporating it into search engines and Office suites,” says corresponding author Peter Cotton, an associate professor in ecology at the University of Plymouth. “The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm.”

The study authors suggest schools need to rethink how they design their assessments and explain to students when using AI technology for certain tasks counts as academic dishonesty.

The study is published in Innovations in Education and Teaching International.


About the Author

Jocelyn Solis-Moreira

Jocelyn is a New York-based science journalist whose work has appeared in Discover Magazine, Health, and Live Science, among other publications. She holds a Master of Science in Psychology with a concentration in behavioral neuroscience and a Bachelor of Science in integrative neuroscience from Binghamton University. Jocelyn has reported on several medical and science topics ranging from coronavirus news to the latest findings in women’s health.

