OpenAI Does Not Disclose AI Technology Chatroom Hacking Incident

ChatGPT developer OpenAI was hacked early last year through its internal messaging system but did not disclose this publicly, the New York Times (NYT) reported on the 4th (local time).

According to multiple sources, at the time, the hacker infiltrated an online chat room where OpenAI employees discussed the latest artificial intelligence (AI) technologies and extracted information. It was confirmed that the hacker did not access the systems where OpenAI’s AI model GPT is built and trained.


OpenAI management disclosed this fact to employees at an all-hands meeting held at the San Francisco office in April last year, shortly after the hacking. However, management decided not to make the hacking public, citing that no customer or partner information was compromised. They also did not report the incident to law enforcement agencies such as the FBI. A source explained, "Management judged that the hacker was an individual unrelated to any foreign government and did not consider this incident a threat to national security."


However, the incident raised concerns among some OpenAI employees that overseas hacking groups, including those from China, could steal AI technology. It also prompted questions about how seriously OpenAI was addressing security issues. The NYT reported that although most AI technologies are currently work and research tools, they could eventually threaten U.S. national security, adding that internal conflicts within the company over AI risks have also surfaced.


Shortly after the hacking, Leopold Aschenbrenner, OpenAI's head of technical programs, sent a memo to the OpenAI board claiming that the company had not done enough to prevent foreign hacking groups, including those tied to the Chinese government, from stealing its confidential information. Aschenbrenner was dismissed earlier this year for leaking other information externally. He recently appeared on a podcast and said that OpenAI's security is not strong enough to protect its core secrets from foreign intrusions.


The NYT noted that while expert analyses on AI technology leaks and national security are currently divided, concerns that China could be involved in hacking targeting U.S. tech companies are not entirely unfounded. Brad Smith, president of Microsoft (MS), testified before Congress last month about how Chinese hackers breached MS’s cloud security and launched widespread attacks on federal government networks.


Some researchers and national security officials argue that even if the core mathematical algorithms of AI systems do not currently pose a national security threat, they could become dangerous in the future and are calling for strengthened controls related to AI research.


Susan Rice, a former official in the Barack Obama administration, warned at an event held in Silicon Valley last month, "Even if the worst-case scenario has a relatively low probability, if it has a large impact, it is our responsibility to take it seriously," adding, "This is not science fiction." OpenAI recently established a safety and security committee to recommend AI safety measures.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
