PHILADELPHIA — Recent advances in artificial intelligence, such as chatbots like ChatGPT, have sparked both enthusiasm and concern in educational circles. Although the technology offers promising AI-driven solutions, it has also stirred up apprehension about its potential misuse in academia. So, what happens when someone thinks your work is just a plagiarized product of technology?
A new study delves into college students’ sentiments after being accused of using ChatGPT to cheat on assignments. Drexel University researchers based their findings on 49 Reddit posts and the ensuing conversations they sparked. These posts were created by students who were flagged for allegedly using ChatGPT on assignments.
The predominant sentiments conveyed were:
- Frustration: A majority of the students claimed innocence. Out of 49 posts, 38 students stated they hadn’t used ChatGPT. However, detection tools like Turnitin or GPTZero had mistakenly flagged their work as AI-generated, leading students to seek advice on how to prove their innocence.
- Distrust in Detection Systems: Many expressed doubts about the accuracy of these detection programs, leading to discussions about how to safeguard against false accusations.
- Reevaluation of Higher Education: Students began questioning the role of universities in the age of AI, with some fearing that overreliance on imperfect detection tools could jeopardize their academic futures.
“As the world of higher ed collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” says study author Dr. Tim Gorichanaz in a university release.
Gorichanaz further commented on the palpable tension between students and educational institutions. The study highlighted the eroding trust between students and their professors, with some students feeling they were perpetually under suspicion. Phrases like “Of course she trusts that AI detector more than she trusts us” underscore the gravity of the trust gap.
Researchers also unearthed discrepancies in academic policies related to AI use.
“There were comments about policy inconsistencies where students were punished for using some AI tools such as ChatGPT but encouraged to use other AI tools like Grammarly. Other students suggested that using generative AI to write a paper should not be considered plagiarism because it is original work,” notes Dr. Gorichanaz. “Many students reached the same conclusion that universities have been grappling with: the need to responsibly integrate the technology and move beyond essays for learning assessment.”
Emphasizing the ramifications of unjust accusations, Dr. Gorichanaz warned that they could severely damage the student-teacher relationship, which is pivotal to a rewarding educational experience.
“While this is a relatively small sample, these findings are still useful for understanding what students are going through right now,” explains Dr. Gorichanaz. “Being wrongly accused, or constantly under suspicion, of using AI to cheat can be a harrowing experience for students. It can damage the trust that’s so important to a quality educational experience. So, institutions must develop consistent policies, clearly communicate them to students and understand the limitations of detection technology.”
Considering the high false-positive rates even with advanced AI detectors, researchers suggest a paradigm shift in academic assessments.
“Rather than attempting to use AI detectors to evaluate whether these assessments are genuine, instructors may be better off designing different kinds of assessments: those that emphasize process over product or more frequent, lower-stakes assessments,” Dr. Gorichanaz concludes.
The study author also suggested incorporating modules that teach the responsible use of generative AI, rather than imposing an outright ban.
The study is published in the journal Learning: Research and Practice.