Cases of Papers Written with Undisclosed AI Surge
International Academic Community: "Research Misconduct Is Rampant, a Fatal Blow to Trust"
The international scientific community is grappling with research misconduct in which generative artificial intelligence (AI) tools such as ChatGPT are used to write academic papers without disclosure.
A representative case involves the international journal Physica Scripta, which on the 9th of last month published a paper proposing a new solution to a complex mathematical equation, only to retract it after the use of AI came to light. The paper appeared genuine until an expert happened to notice the phrase "Regenerate response" on its third page. This phrase is the label on the button pressed to ask ChatGPT for a new answer, and the author presumably wrote the content with ChatGPT and accidentally carried the phrase over while copying and pasting. The expert immediately saved a screenshot of the phrase and raised the issue on PubPeer, a website where published research results are discussed.
Following this, the editors of Physica Scripta launched an investigation and confirmed that the author had written the paper using ChatGPT without disclosing it. It was particularly shocking that this went undetected during the roughly two-month peer review process. The editors decided to retract the paper on the grounds that the authors had not disclosed their use of ChatGPT when submitting the manuscript.
What is more serious is that such cases are only the "tip of the iceberg." According to the international journal Nature, Guillaume Cabanac, the computer science professor at the University of Toulouse in France who spotted the problem with the Physica Scripta paper, has flagged more than ten similar papers on PubPeer since April. Beyond the phrase "Regenerate response," many papers were found to contain telltale AI-generated sentences such as "Please note that as an AI language model."
Editors at publishers such as Elsevier and Springer Nature, which put out many renowned international journals, currently permit the use of AI tools such as ChatGPT and large language models (LLMs) in writing papers on the condition that the use is disclosed. The problem is that many people are presumed to be using AI without disclosure and passing the papers off as entirely their own work. Moreover, because AI-generated content can be inaccurate, some papers even contain erroneous equations or experimental results. Yet with academic journals short of peer reviewers, many such cases are never properly filtered out.
The academic community is deeply concerned. Elisabeth Bik, a microbiologist and independent research consultant, told Nature, "The rapid rise of generative AI tools like ChatGPT and LLMs will fuel 'paper mills' that supply fake papers to researchers who want to inflate their publication counts," adding, "This could become hundreds of times more serious in the future, and it is very worrying that papers produced through misconduct we have not yet detected are flooding in."
David Bimler, a former university professor known as a "fake-paper sleuth," likewise said, "The problem of papers written by AI without disclosure and published in academic journals will only grow more serious," adding, "The number of gatekeepers able to detect AI-written papers cannot keep up with the growing volume of fake publications."
![[Reading Science] Scientists Caught Copy-Pasting with ChatGPT](https://cphoto.asiae.co.kr/listimglink/1/2023032816473169804_1679989651.jpg)

