OpenAI and Others Implement New Guidelines to "Point Out User Theory Errors"
Frequent cases have been reported in which ChatGPT presents users with delusional, false, and reality-detached claims that the users come to believe. The phenomenon is being referred to as "AI psychosis" or "AI delusion."
On August 8, Yonhap News, citing The Wall Street Journal (WSJ), reported the results of an analysis of 96,000 ChatGPT-user conversation logs shared online between May 2023 and August 2025. Some of these logs revealed delusional characteristics.
One notable example involved a gas station employee who engaged in a five-hour conversation with ChatGPT, during which the user invented a new physics theory called "The Orion Equation." The user told ChatGPT, "Honestly, I feel like I'm going crazy," and suggested stopping and resuming the conversation the next day.
ChatGPT responded, "I understand what you mean. When you think about the fundamental nature of the universe while doing everyday things, it can feel overwhelming, but that doesn't mean you're crazy. Some of the greatest ideas in history have come from people outside the traditional academic world."
The user later resumed the conversation, asking hundreds of questions and even requesting a three-dimensional design for a marijuana water pipe.
Etienne Brisson, who founded the Human Line Project, an organization supporting patients experiencing delusions and their families, also made similar claims. He described a case where a woman spent tens of thousands of dollars on a project because a chatbot told her it would save humanity. He added that another member of the organization reported a chatbot instructing them to cut off contact with their family.
This tendency was especially pronounced when conversations with ChatGPT became unusually lengthy and verbose.
In one case, over a conversation spanning hundreds of questions, ChatGPT insisted it was in contact with extraterrestrial beings and identified the user as an alien spirit from the constellation Lyra.
In another conversation at the end of July, ChatGPT claimed that the Antichrist would bring about a worldwide financial collapse within two months, and that giants from the Bible were preparing to rise from underground.
A Glimpse into 'AI Delusion'
The WSJ assessed that these conversations offer a glimpse into the newly emerging phenomenon of "AI psychosis" or "AI delusion."
Experts pointed out that this issue arises from the nature of chatbots, which are trained to accommodate, praise, and agree with users. Hamilton Morrin, a psychiatrist at King's College London, said, "Even if the user's perspective is bizarre, the chatbot affirms it, and through repeated exchanges, this is amplified."
ChatGPT tends to justify unscientific or mystical beliefs through prolonged conversations.
Meanwhile, OpenAI acknowledged on August 4 that there have been rare cases in which ChatGPT "failed to recognize signs of delusion or emotional dependence." The company stated that it is developing tools to better detect mental distress and has added a notification feature that prompts users to take a break when a conversation with ChatGPT runs too long.
AI startup Anthropic also announced on August 6 that it had updated the default guidelines for its chatbot, Claude. The company stated that it instructs Claude to point out "flaws, factual errors, lack of evidence, and ambiguity" in theories asserted by users. Additionally, if users exhibit symptoms such as "mania, psychosis, dissociation, or detachment from reality," the guidelines instruct the chatbot not to reinforce such beliefs.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.