Sam Altman, ABC News Interview
"Could Be Used in Cyber Attacks"
"Fear of Dangerous Use Keeps Me Awake at Night"
The CEO of OpenAI, developer of the generative artificial intelligence (AI) chatbot ChatGPT, which has sparked a global craze, has voiced concerns about the risks of AI technology.
Sam Altman, CEO of OpenAI, said in an interview with ABC News on the 17th (local time), "AI technology carries existential risks, but it could also become the greatest technology ever developed by humans."
However, he emphasized, "We need to be cautious about this," adding, "People should be glad that we are somewhat afraid of this (AI)."
The interview coincided with the release of OpenAI's new large language model, GPT-4, unveiled on the 14th. While previous models could process about 3,000 words of text at a time, GPT-4 can analyze more than 25,000 words at once. Another major difference is that it can see and understand images as well as text.
Moreover, even users without basic coding knowledge can use it to create games. It has also scored in the top 10% on various exams, including the U.S. bar exam.
Altman admitted that GPT-4 is "not perfect" and revealed that he loses sleep over fears that the technology could be put to dangerous use. In particular, he is concerned that AI could be used to spread misinformation on a massive scale.
He pointed out, "(AI technology) could be used for cyberattacks," adding, "There may be people who, unlike us, do not impose safety restrictions." He continued, "There won't be much time for society to figure out how to handle or limit them."
Altman also addressed the issue of ChatGPT delivering incorrect information. "The issue I want to warn people about most is hallucination," he emphasized.
He stated, "This model can present completely fabricated stories as if they were true," and argued that it should be seen as a reasoning engine rather than a fact database. He explained that it is not a model that stores and retrieves factual information, but one that predicts the next word based on its understanding of language.
Additionally, he cautioned that although GPT-4 is more than 40% more accurate than its predecessor, GPT-3.5, it should not be used as a primary source of information.
Furthermore, on the possibility of AI taking jobs, he said, "Humanity has adapted remarkably well to massive technological transitions," but added, "However, the fact that this transition could occur within a single-digit number of years is also a concern for me."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


