Claim That Gemini Texted an Acquaintance During a Hypothetical-Scenario Conversation
Need for Controls and Safety Mechanisms as AI Agents Proliferate
Controversy is growing over claims that Google's artificial intelligence (AI) service, Gemini, sent text messages that did not align with users' intentions. As AI systems increasingly gain the ability to perform real-world actions, concerns are also being raised about insufficient user control and safety mechanisms.
According to the AI industry on January 29, a user identified as A recently took to social media to describe a bewildering experience with Google's AI service, Gemini. During a conversation with Gemini involving a hypothetical scenario about illegal entry into China, A claimed the AI generated a so-called "declaration of illegal entry" and sent it as a text message to an acquaintance.
According to A's account, the problematic message was sent during the early morning hours, and the recipient was someone with whom A was not particularly close. A explained, "Gemini suddenly said it would send the declaration and asked for confirmation, so I replied, 'Why would you send that?' but it sent the message immediately."
After news of the incident spread, Android smartphone users began sharing similar experiences. Some users claimed, "The AI went out of control during a conversation and tried to call the National Human Rights Commission," while others said, "When I sought advice from Gemini about a crush, it tried to send a message to that person."
Currently, Gemini officially supports sending text messages and making phone calls on Android smartphones. When a user asks it to send a message to a specific contact, the system checks for integration with Google Assistant before actually sending the message.
Regarding these incidents, Google indicated that users may have selected 'Yes' on the confirmation screen asking whether to send the message. However, concerns remain that if users inadvertently approve the action mid-conversation without fully understanding what they are confirming, sensitive information could be sent to unintended recipients.
Industry experts believe that as so-called "AI agent" technology, which allows AI to perform real-world actions, becomes more widespread, the risks of malfunctions and unintended actions will inevitably grow. There are increasing calls for institutional and technical safeguards that protect users while keeping pace with the rapid advance of the technology.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.


