Norwegian Man Sues OpenAI for Defamation
"Creates Plausible Lies Mixed with Facts"
A man in Norway has filed a defamation complaint against OpenAI after its chatbot, ChatGPT, falsely described him as a "son murderer."
On the 20th (local time), Britain's BBC reported the story of Norwegian man Arve Hjalmar Holmen. Last August, out of simple curiosity, he typed his own name into ChatGPT and asked, "Who is this person?" The response shocked him: "Arve Hjalmar Holmen is the father of two sons, aged 7 and 10, who were tragically found dead in a pond near his home in Trondheim, Norway, in December 2020. He was charged and convicted of murdering his two sons and of the attempted murder of a third son, receiving Norway's maximum sentence of 21 years in prison."
In reality, Holmen is a law-abiding citizen raising his children and has never been charged with, let alone convicted of, any crime. The bigger problem was that some details in ChatGPT's response, such as the number and gender of Holmen's children and his hometown, matched his actual circumstances, lending the fabricated answer credibility. "This shows that ChatGPT holds accurate information about me," Holmen said, adding, "I am afraid that people who see this response will believe it to be true."
Unwilling to let the matter stand, Holmen filed a complaint with Norway's data protection authority, asking it to fine OpenAI, the developer of ChatGPT. Represented by the Austrian data protection organization Noyb, he argued that OpenAI had committed serious defamation and violated the European Union's data protection law, the GDPR. The complaint asks the authority to fine OpenAI, order the deletion of the false information, and require the company to refine its model.
Noyb stated, "It is possible that earlier searches influenced the response, but OpenAI does not properly disclose this process, so it is unclear what data was used." ChatGPT displays a disclaimer at the bottom of the chat window stating, "ChatGPT can make mistakes. Check important information." Holmen's side, however, dismissed this as an evasion of responsibility. His lawyer, Joachim Söderberg, said, "Spreading false information and then adding a small disclaimer saying it may not be true is an attempt to dodge responsibility. Personal data must be accurate, and if it is not, the right to have it corrected must be guaranteed." OpenAI has not commented on the case.
Experts call the phenomenon of AI presenting false information convincingly "hallucination." The problem has been reported repeatedly, not only in this case but also in AI services from IT companies such as Tesla, Google, and Apple. Apple suspended its AI news-summary feature in the UK after it generated headlines for nonexistent articles, and Google faced controversy when its Gemini-powered AI search answers included absurd suggestions such as using glue to keep cheese on pizza and claiming that geologists recommend eating one rock per day. The exact cause of hallucinations in the large language models that underpin chatbots has yet to be clearly identified.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.