Controversy Erupts as Professor Who Banned AI Use Is Found to Have Used ChatGPT for Lecture Materials
Students Criticize "Double Standards" and Demand Tuition Refunds
A controversy has erupted after it was revealed that a professor at a U.S. university, who had defined the use of generative artificial intelligence (AI) tools such as ChatGPT by students for assignments as academic dishonesty, had himself used ChatGPT to prepare his lecture notes.
On May 14, the New York Times (NYT) reported that in February, Ella Stapleton, a senior at Northeastern University minoring in business administration, noticed something odd while reviewing lecture notes her professor had uploaded to the school system for an organizational behavior class. In the middle of the notes she found phrases such as "expand in all areas" and "write in more detail and specifically," which appeared to be prompts given to ChatGPT.
Stapleton then reviewed other lecture materials, including slides created by adjunct professor Rick Arrowood, who taught the course. She found errors typical of generative AI, such as distorted images of people and typos in the text.
Stapleton was shocked. The syllabus for the course explicitly stated that unauthorized use of AI or chatbots for assignments or exam answers constituted academic dishonesty. "The professor forbade us from using AI, but was using it himself," she said angrily. Stapleton filed an official complaint with the business school, demanding a refund of the $8,000 (approximately 11.3 million won) portion of her semester tuition allocated to the course. Despite several meetings with business school officials, she was told that a tuition refund was not possible.
Adjunct professor Rick Arrowood, who has taught for nearly 20 years, told the NYT that he deeply regretted the incident. He said he had uploaded his existing teaching materials, lecture notes, and other resources to ChatGPT, the AI search engine Perplexity, and the AI presentation service Gamma to create new materials. "In hindsight, I wish I had reviewed things more carefully," he said, adding, "I uploaded the materials to the school system to help students, but since the class is discussion-based, I never used them during lectures." Arrowood also said that it was only after this incident that he realized materials created with AI assistance could contain errors.
Following the incident, Northeastern University issued official AI usage guidelines. The guidelines require that any use of AI be disclosed and that the accuracy and appropriateness of the resulting materials be verified.
Meanwhile, the NYT reported that complaints have been increasing on popular lecture evaluation sites used by American college students, with students expressing frustration that their professors rely excessively on AI. They criticized what they saw as hypocrisy ("professors can use it but students cannot") and argued, "We pay exorbitant tuition to be taught by humans, not to receive instruction from an algorithm that we could consult for free ourselves."
Last fall, a student who took an online anthropology class at Southern New Hampshire University claimed that the professor had not even read her assignment, alleging that the feedback she received contained instructions the professor had entered into ChatGPT. When she raised the issue in class, the professor responded, "I did read the students' assignments," and explained, "I only used ChatGPT as a guide, as permitted by school policy."
Professors who use generative AI argued that it helps them prepare for classes and allows them to devote more time to teaching. They said it reduces the burden of monotonous, mechanical tasks and of answering students' basic questions, freeing up time for student consultations and other educational activities.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.


