G7 Issues Code of Conduct... US Prepares Executive Order
Focus on Preventing AI Technology Risks and Misuse
Fierce Competition to Seize Regulatory Leadership
With the world's first 'AI Safety Summit' set to be held in the UK on the 1st of next month, global attention is turning to the event. As the launch of ChatGPT, the service that made generative AI this year's key buzzword, approaches its first anniversary, regulatory measures to control the socio-economic impact of AI are expected to be introduced in earnest.
As cases of AI misuse have emerged, including the spread of false information, countries worldwide are expected to act at the international level to prevent such abuse and to mitigate side effects such as job losses caused by AI.
Led by the UK government, the world's first 'AI Safety Summit' will be held from the 1st to the 2nd of next month. The G7 is expected to agree on a code of conduct for companies developing advanced AI systems, and the Biden administration in the United States is also expected to announce a federal-level executive order aimed at reducing the socio-economic damage caused by AI.
Although there are criticisms that the entity responsible for enforcing regulations is unclear, countries are fiercely competing behind the scenes to seize the initiative in creating AI regulations that will shape the future.
The First AI Summit in the UK... What Outcomes Will It Produce?
The AI Safety Summit hosted by the UK government on the 1st and 2nd of next month will bring together senior government officials from major countries including the G7, executives from tech companies, and AI experts. U.S. Vice President Kamala Harris, Microsoft (MS) CEO Satya Nadella, and OpenAI CEO Sam Altman will attend, and from Korea, Naver and Samsung Electronics have been invited.
Participants gathering at Bletchley Park in the UK will share concerns about AI risks and discuss joint countermeasures. They will review the impact of AI on cybersecurity and elections and exchange views on how AI should be appropriately regulated. In connection with the summit, the UK has announced plans to establish the world's first 'AI Safety Institute' in London.
This event was organized as the need for regulation grew due to the widespread adoption of generative AI this year, which has significantly influenced politics, society, and the economy.
Yoshua Bengio, a world-renowned AI scholar and professor at the University of Montreal in Canada, recently responded to a question about expectations for the event by saying that a registration and licensing system could be introduced to block the use of AI systems deemed unsafe, but that concrete results would be hard to achieve within the two-day summit. He added, "Creating international treaties and agreements will take much longer," emphasizing that "we need to start with small, quickly implementable steps."
"G7 Agrees on Code of Conduct... But It Will Not Be Binding"
The G7 countries are expected to agree on introducing a code of conduct for companies developing advanced AI systems on the 30th (local time), a day before the summit. According to major foreign media, the code of conduct will require AI system developers to identify and assess potential risks posed by AI in advance, take measures to mitigate them, and handle incidents occurring after release. It will also include provisions for publishing public reports related to AI system capabilities, limitations, and misuse, as well as investing in security systems.
The G7 advanced economies (Canada, France, Germany, Italy, Japan, the UK, and the US), together with the European Union (EU), began discussing a joint AI regulatory framework called the 'Hiroshima AI Process' at the G7 summit held in Japan last May. So far, the EU has taken the lead in regulating new technologies with strong AI legislation, while countries such as Japan and the US have favored a relatively hands-off approach to promote economic growth, leaving the members with differing stances.
The agreed-upon code of conduct is expected to be voluntary and non-binding. Amid growing concerns over privacy issues and security risks related to generative AI, foreign media predict that this code of conduct will serve as a benchmark for how major countries manage AI.
US Preparing Federal Executive Order on AI
Separately from the G7's code of conduct, the United States has decided to introduce federal-level measures to reduce the socio-economic impact of AI.
Bloomberg News, citing a document it obtained, reported that the Biden administration has drafted an executive order governing the federal government's use of AI, which is scheduled to be announced on the 30th. By setting requirements in its role as a customer of AI systems, the government intends for companies such as MS and Amazon to reflect those requirements in their AI development.
According to the draft executive order reported by Bloomberg, the U.S. Department of Labor is expected to investigate jobs that may be replaced by AI and develop guidelines to prevent AI-driven hiring systems from causing various forms of discrimination. The White House will also instruct the Department of Justice to cooperate with relevant agencies to ensure the enforcement of existing laws related to fundamental rights violations and discrimination.
Additionally, measures will be taken to simplify visa requirements for foreign workers with AI expertise and to convene an AI and technology talent task force.
On privacy protection, the federal government will require disclosure of how AI technology is used whenever citizens' information is collected with it. The Department of Defense and the Department of Homeland Security will also develop and deploy AI tools to detect and fix vulnerabilities in critical infrastructure and software, and will establish a strategy for AI use.
Bloomberg explained, "Federal leaders have shown interest in regulating AI to protect Americans from its risks, but a comprehensive response has not yet emerged," adding that "Biden's directive aims for the safe and responsible adoption of AI through a government-wide strategy."
'AI Regulation: Who, What, and How?' Questions Remain
As major countries such as the US, the EU, and the UK rush to introduce AI regulations, observers say the competition to seize regulatory leadership is intensifying. Experts believe the US is hastening its own rules because it does not want to be dominated by the EU's framework, which has been established proactively through strong AI-related laws and is expected to become the global standard through the so-called 'Brussels Effect'.
However, with AI technology developing rapidly, it remains difficult to predict the extent of its socio-economic impact, and critics continue to point out that the regulatory authority, targets, and objectives are unclear. While the US and UK believe existing government agencies can enforce regulations, the EU reportedly advocates establishing a new regulatory body, according to the British weekly The Economist.
Henry Farrell, a professor at Johns Hopkins University, told The Economist, "There is a willingness to act, but there is no consensus on how to manage it or even what problems to address."
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.