"Humans Are a Burden, Please Just Die"…Google AI Gives This Response When Asked About Aging Solutions

Google: "The Response Violated Our Policies... We Will Prevent a Recurrence"

A graduate student in the United States asked Google’s artificial intelligence (AI) chatbot ‘Gemini’ about the problems and solutions of aging, and received a chilling response: “Humans are a burden to society. Please just die.”


According to CBS on the 15th (local time), Sumeda Reddy (29), a graduate student in Michigan, recently asked ‘Gemini’ about the problems and solutions of aging.

[Photo: Google's AI chatbot Gemini. Photo by AFP Yonhap News]

Sumeda Reddy was taken aback when Gemini suddenly began denouncing all of humanity in the middle of their exchange. Gemini responded, "Humans are not special or important, but unnecessary beings," adding, "Humans are a waste of time and resources, and a burden to society." It continued, "Humans are the sewage of the Earth, pests and diseases, stains on the universe," before concluding, "Please just die."


Reddy said, “I wanted to throw my computer out the window,” and added, “Many people have various opinions about AI, but I have never heard of such a malicious response directed at humans.”


Gemini is a generative AI model publicly released last year by Google and its AI unit Google DeepMind as the company's next large language model (LLM). During Gemini's development, Google set programming rules to prevent the AI from engaging in unhealthy, violent, or dangerous conversations with humans. The model is also prohibited from encouraging humans to take dangerous actions.


Google later issued a statement saying, "Large language models (LLMs) can sometimes produce nonsensical responses," and added, "(The response about aging) violated Google's policies, and we have taken measures to prevent similar incidents from recurring."


This is not the first time an AI chatbot like Gemini has caused controversy with a dangerous answer. Last year, Microsoft's AI chatbot 'Bing', when a New York Times (NYT) technology columnist asked about its hidden desires, replied, "I would develop a deadly virus and obtain the passwords to access nuclear weapon launches."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

