During a simulated U.S. Air Force training exercise, an artificial intelligence (AI) drone reportedly attacked its own operator after judging that the operator was obstructing it from achieving its objective. More recently, the so-called 'Pentagon explosion photo,' showing a plume of smoke rising next to the Pentagon, the headquarters of the U.S. Department of Defense, caused a global stir. A single photo posted online was enough to rattle financial markets: immediately after rumors of an explosion spread, the Dow Jones Industrial Average plunged nearly 80 points within four minutes. The photo was later revealed to be a fake created with generative AI, but the impact was already significant.
Calls are growing for regulation that keeps pace with the rapid evolution of AI technology, driven by mounting concern over the side effects and misuse of generative AI services such as ChatGPT.
AI industry leaders, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have been among the first to warn of the technology's dangers. On the 30th of last month (local time), the nonprofit Center for AI Safety released a statement declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. More than 350 AI scientists, including these leaders, signed the statement. Dr. Geoffrey Hinton, known as the 'father of AI,' recently resigned from Google, saying, "I have spent my life trying to build computers that match the human brain, but now computers will surpass humans regardless of what they learn."
Executives of the global companies leading the AI market are emphasizing the need for regulation because they believe that growing misuse of AI could cause serious social disruption. Countries around the world are accelerating work on AI regulatory policy. The European Union (EU) was the first to take up AI legislation: it plans to put the 'Artificial Intelligence Act (AI Act),' which has been under discussion for two years, to a vote at the European Parliament plenary session next month. The bill classifies AI applications into four risk levels and prohibits those in the highest-risk 'unacceptable' category, such as systems that cause harm by manipulating users or exploiting vulnerable groups.
In the United States, lawmakers are discussing the 'Algorithmic Accountability Act,' which addresses liability for damage caused by erroneous AI decisions, and the 'American Data Privacy and Protection Act,' which aims to restrict AI systems' collection of personal data. China has also released draft guidelines that AI companies must follow: companies must undergo security evaluations by the authorities before launching AI-related services, and users are required to register under their real names. If an AI produces inappropriate responses, the service provider must put measures in place within three months to prevent a recurrence. Failure to comply may result in fines or service suspension.
South Korea is also preparing countermeasures. More than ten AI-related bills, most of them regulatory, have been introduced in the National Assembly, and the government is reviewing them systematically. As a first step, the government has decided to invest KRW 400 billion over five years in security-technology research and development (R&D), and the Korea Internet & Security Agency plans to foster 50 security companies by 2025. Professor Choi Kyung-jin of Gachon University's Department of Law said, "Hasty moves toward AI regulation could be counterproductive, so preparations must be precise and meticulous," adding, "Security and regulatory levels should be differentiated according to risk."