AI Unaware of Its Own Thought Process... Unpredictable Risks
"Power Rivaling Nuclear Weapons"... Warnings of AI Weaponization and Loss of Control
Experts Call for "Technical and Institutional Safeguards"
In March, AI drones that attack enemy forces without human control appeared in the Ukraine war for the first time. In a video released by the Ukrainian military, an AI drone struck a Russian tank and disabled it completely. In the war between Israel and Hamas, AI went as far as creating a 'kill list': the AI generated a list of people to be killed and buildings to be struck, which intelligence officers then approved. Even though there was about a 10% chance that the AI's list was wrong, soldiers carried out operations based on it. This marked an unprecedented use of AI as a lethal weapon. If AI escapes human control, it could pose dangers surpassing even nuclear weapons.
As AI rapidly advances, concerns about its risks are growing. Issues such as AI-generated fake news and copyright infringement have already surfaced, but greater dangers, harder to gauge, lie beneath the surface. AI is increasingly involved in decision-making, yet it remains an 'unknown entity' because the reasons behind its decisions are unclear. Warnings have emerged that, without a 'red line' that AI development must not cross, humanity could face extinction.
AI is often called a 'black box' because the process it goes through to produce a result is not visible. Owing to training on vast amounts of data and complex computations, even the designers of an algorithm find it difficult to infer that process. Since no one can predict when AI will produce what result, it can be like a time bomb. Research is under way on 'explainable AI,' technology that would let humans accurately understand the basis for AI's judgments.
Many errors are hard to explain. Early on, ChatGPT frequently gave harmful responses, such as bomb-making instructions and phishing email templates, when users induced it to break its taboos, a phenomenon known as 'jailbreaking.' Because the exact mechanism behind jailbreaking was unknown, developers could only respond by gradually tightening guardrails.
Google's 'Gemini' ran into similar problems. Its image-generation feature caused controversy by producing historically inaccurate images, such as a Joseon Dynasty general depicted as a Black man in hanbok, and was suspended indefinitely within a month of release. Google has yet to find a fix; the problems are presumed to stem from bias in the training data and from the technical measures taken to correct it.
As AI development accelerates, the scale of the risk is becoming hard to estimate. The industry increasingly forecasts that Artificial General Intelligence (AGI), AI that thinks like a human, will emerge within a few years. The smarter AI becomes, the larger the black box that humans cannot understand. Chris Meserole, executive director of the Frontier Model Forum, said at the 'Generative AI Red Team Challenge' on the 12th, "No one knows what risks AI technology holds," adding, "What is certain is that the more AI develops, the lower its safety becomes." The Frontier Model Forum is an organization formed by OpenAI, Google, Microsoft, and Anthropic to identify AI's potential risks.
Warnings are coming from many quarters. A recent report commissioned by the U.S. State Department and produced by the private firm Gladstone AI, 'An Action Plan to Increase the Safety and Security of Advanced AI,' warned that "AI could cause human extinction," singling out 'weaponization' and 'loss of control' as AI's key risk factors: AI could be used in biochemical and cyber warfare. The Center for AI Safety (CAIS), a nonprofit organization of AI experts, compared AI's development to the power of nuclear weapons in its report 'An Overview of Catastrophic AI Risks.' The report stated, "AI is increasingly becoming an autonomous agent that operates without human intervention," and warned that AI could justify problematic goals, resist control, or deceive humans. If an AI agent with the goal of 'destroying humanity' were to emerge, scenarios in which it researches nuclear weapons or writes tweets to mobilize other AIs could become reality.
Experts emphasize the need to respond to these risks: companies must conduct AI research and development (R&D) safely and responsibly, and governments are urged to establish regulatory and supervisory bodies for AI and to strengthen international cooperation to that end. Jang Byung-tak, director of the Seoul National University AI Research Institute, said, "Discussion of AI safety is an urgent issue," adding, "Technical and institutional safeguards are necessary."