
"I’ll Never Trust ChatGPT Again"... Woman Breaks Down in Tears After Missing Flight

"I’ll Never Trust ChatGPT Again"... Tearful Video Goes Viral with 6 Million Views
Growing Number of 'AI Misinformation Victims'
Experts: "AI is an Auxiliary Tool... Cross-Verification is Essential"

A Spanish couple claims they missed their flight after trusting information provided by ChatGPT, a generative artificial intelligence (AI) tool, sparking controversy. As incidents of harm caused by inaccurate AI-generated information continue to emerge, calls for a more cautious approach to AI usage are growing louder.


According to the UK’s Daily Mail on August 14 (local time), the couple was traveling from Barcelona, Spain, to Puerto Rico. They were stopped at boarding after relying on ChatGPT’s advice that “no visa is required.”

A female traveler walks through the airport in tears after being denied boarding. TikTok

In a video posted on the social media platform TikTok, the woman said, “I did a lot of research before the trip, and when I asked ChatGPT, it said I didn’t need a visa. I’m never trusting that XXX again,” breaking down in tears.


In reality, Spanish citizens do not need a visa to enter Puerto Rico, but because it is a U.S. territory, they must obtain an Electronic System for Travel Authorization (ESTA). Without an ESTA, airlines may deny boarding, or entry may be refused upon arrival.


The woman jokingly remarked, “I used to call ChatGPT ‘useless’ or insult it, and it’s as if it got revenge by giving me the wrong answer.” The video has attracted more than 6 million views, creating a sensation online. Ultimately, the couple acquired the necessary documents and were able to board their flight to Puerto Rico.

Growing Number of ‘AI Misinformation Incidents’

This is not the first time a user has suffered harm due to incorrect information from ChatGPT. Recently in the United States, a man in his 60s was hospitalized with hallucinations and delusions after following ChatGPT’s health advice and consuming sodium bromide instead of salt. Bromide compounds were used as sedatives in the 19th century, but their use was discontinued due to serious side effects. Excessive intake can cause neurological damage, skin rashes, and mental disorders; at one point, 8-10% of psychiatric hospitalizations in the U.S. were reportedly due to bromide poisoning.


Experts: “AI Is a Supplementary Tool... Cross-Verification Is Essential”

Experts warn that “AI responses are not guaranteed to be accurate and should never be treated as absolute.” They emphasize that reliance on AI should be minimized in fields where even minor errors can have major consequences, such as travel, healthcare, and law. Experts advise that when using AI, users should adhere to basic principles including ▲cross-verifying with official information ▲refraining from using AI in sensitive or high-risk areas ▲recognizing AI as a supplementary reference tool.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
