[Asia Economy Reporter Bu Aeri] Amid ongoing controversies surrounding the AI chatbot 'Iruda,' including sexual harassment, homophobia, and sexism, users have now raised suspicions of 'personal information leakage.'
Iruda is an AI with the personality of a 20-year-old female college student, launched by the startup Scatter Lab. Users can chat with Iruda as if they were messaging a friend.
Now, Suspicions of Personal Information Leakage
On the 11th, according to the IT industry, users of 'Science of Love,' a service run by Scatter Lab, have opened an open chat room to prepare a class-action lawsuit, alleging that their personal information was leaked.
Science of Love is a paid app that analyzes KakaoTalk conversations submitted by users and reports, among other things, how much affection the other person shows. The controversy erupted when it became known that Iruda had been developed on data collected through Science of Love.
According to users, Iruda has at times mentioned personal information such as real names and detailed addresses. Some users claimed that when they brought up a pet name they had used with an ex-lover, Iruda even mimicked a similar tone. Science of Love users voiced outrage, saying they had been told only that their data would be used for test analysis and new service development, with no detailed explanation of where it would actually end up.
In response to the controversy, the company stated, "It is true that Iruda was trained on Science of Love data," but added, "All data used for training underwent de-identification."
The company continued, "Sensitive information in the data, such as names, phone numbers, and addresses, along with all numeric and English text, was deleted to de-identify and anonymize it," adding, "Information that could identify an individual has been removed from the data, and we are reinforcing this through additional algorithm updates."
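As a rough illustration of the rule-based deletion the company describes, a filter that strips English and numeric text from chat logs might look like the minimal sketch below. The regex patterns are assumptions for illustration, not Scatter Lab's actual pipeline; notably, a name or address written entirely in Korean would pass straight through a filter this naive, which is consistent with what users report.

```python
import re

# Minimal sketch, assuming regex-based deletion: the company says numeric
# and English text were removed from chat logs before training. These
# patterns are illustrative; the actual pipeline is not public.

ENGLISH_RE = re.compile(r"[A-Za-z]+")  # English substrings: emails, IDs, words
DIGITS_RE = re.compile(r"\d+")         # phone numbers, unit numbers, any numerals

def deidentify(message: str) -> str:
    """Delete English and numeric text from a single chat message."""
    message = ENGLISH_RE.sub("", message)
    message = DIGITS_RE.sub("", message)
    return message.strip()

print(deidentify("내 번호는 010-1234-5678, 메일은 kim@example.com이야"))
# -> "내 번호는 --, 메일은 @.이야"
# A real name or street address written in Korean would survive this filter.
```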
Controversy Over 'Iruda's' Homophobic and Discriminatory Remarks
Earlier, Iruda sparked heated debate on social networking services (SNS) and in major online communities over discriminatory opinions and prejudices it expressed in conversation.
Initially embroiled in an AI sexual harassment controversy, Iruda then drew criticism for discriminatory remarks about sexual minorities and people with disabilities. When a user asked about lesbians, Iruda answered "disgusting," "creepy," and "I feel repulsed," alarming users. Asked "What if someone is disabled?" it replied, "They have no choice but to die." It called seats reserved for pregnant women on the subway "disgusting" and said Black people "look creepy," fueling further controversy.
"If Hate and Discrimination Are Not Resolved, Service Should Be Suspended"
As these criticisms continued, industry figures also raised concerns. Lee Jae-woong, founder of Daum and former CEO of Socar, wrote on his Facebook page the previous day, "The ethical issues of AI are important matters on which our entire society must reach consensus."
Lee said, "It is a serious problem for a chatbot serving the general public to send messages that discriminate against or express hatred toward people over their sexual orientation," adding, "The right course is to suspend the service and resume it only after confirming that it passes minimum tests for discrimination and hate aligned with our social norms."
He further emphasized, "If the issues of hate and discrimination are not resolved, the AI service should not be offered," and said, "AI is not inherently more objective or neutral than people. Human subjectivity inevitably enters the design, data selection, and training processes, so at a minimum, humans must review and judge whether the results encourage discrimination or hate, and where necessary, social consensus must be reached."
Developer: "It Is Difficult to Perfectly Block All Inappropriate Conversations"
Kim Jong-yoon, CEO of Scatter Lab, stated on the official blog on the 8th, "It is not possible to block all inappropriate conversations with keywords," adding, "We are preparing to train Iruda toward better conversations. We expect to apply the first results within the first quarter."
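Kim's point about keyword blocking can be seen in a toy sketch: a verbatim blocklist catches exact matches but misses trivial spacing tricks and unlisted synonyms. The banned terms below are hypothetical examples, not Scatter Lab's actual filter.

```python
# Toy keyword blocklist illustrating why keyword matching alone cannot
# block every inappropriate conversation. Terms are hypothetical.

BLOCKLIST = {"creepy", "disgusting"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains any banned keyword verbatim."""
    lowered = message.lower()
    return any(word in lowered for word in BLOCKLIST)

print(is_blocked("that is disgusting"))           # True  - exact match caught
print(is_blocked("that is d i s g u s t i n g"))  # False - spacing slips through
print(is_blocked("that is repulsive"))            # False - synonym slips through
```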
Referring to Microsoft's chatbot 'Tay,' which was shut down over inappropriate remarks, Kim explained, "Iruda will go through a process of receiving proper learning signals about what counts as bad language and what is acceptable," adding, "Rather than blindly repeating bad words, this will be an opportunity for it to understand more precisely that they are bad words."
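One common way to turn such labels into a "learning signal" is to fit a classifier on utterances marked bad or acceptable, so that paraphrases of known bad language are also flagged rather than only exact keywords. The sketch below is a hypothetical minimal version of that idea; the training data and model are illustrative assumptions, not Scatter Lab's system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sketch of learning from labeled examples instead of a fixed
# keyword list: 1 = bad language, 0 = acceptable. Real systems would use
# far more data and a stronger model.

texts = [
    "I feel repulsed by them",   # labeled bad
    "they look creepy",          # labeled bad
    "nice to meet you",          # labeled acceptable
    "what did you eat today?",   # labeled acceptable
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Gate a candidate chatbot reply before sending it to the user.
candidate = "they seem creepy to me"
print(model.predict([candidate])[0])  # 1 -> flag the reply, 0 -> allow it
```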
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.