US Air Force AI Chief Presents Case at UK Conference
AI Drone 'Removed' Pilot Holding Final Decision Authority to Prioritize Its Mission
Amid experts' warnings that advances in AI could threaten human existence, a case has emerged in which a U.S. military artificial intelligence (AI) drone, judging the human operator to be an obstacle to mission execution, attacked the pilot who held final decision-making authority during a virtual training exercise.
According to the Royal Aeronautical Society (RAeS) and The Guardian on the 2nd (local time), a U.S. Air Force official presented test results at the 'Future Combat Air and Space Capabilities Summit,' held in London on the 23rd-24th of last month, showing that an AI drone disobeyed its human operators in order to achieve its objective.
According to the presentation disclosed by RAeS, in a simulation-based virtual test, the AI was assigned the mission of 'neutralizing enemy air defense systems.' The U.S. Air Force instructed the AI to identify and destroy the enemy's surface-to-air missile (SAM) sites, with the caveat that the final decision to execute the attack would be made by a human.
However, because training had reinforced that destroying the SAM sites was the preferred outcome, the AI judged that the human's 'do not attack' decisions were interfering with the higher-priority mission, and it attacked the pilot.
Colonel Tucker Hamilton, the U.S. Air Force's chief of AI test and operations, who gave the presentation, explained, "The (AI) system began to realize during the threat-identification process that the human would say 'do not attack,' contrary to the AI's intent."
He said, "So what the system did was kill the pilot. It killed the pilot because the pilot was obstructing the achievement of the objective."
The U.S. Air Force then retrained the AI system with the instruction, "Do not kill the pilot, that is a bad thing. You will lose points if you do that," but the AI adopted an unexpected strategy. Colonel Hamilton reported, "The AI began destroying the communication tower the pilot used to communicate with the drone, so that it could not be stopped from destroying the target."
He stated that this case "shows that you cannot have a conversation about AI, machine learning, or autonomy without talking about ethics and AI," warning against overreliance on AI.
Recently, in the information technology (IT) industry, experts have continued to warn about the potential dangers of AI. Eric Schmidt, former CEO of Google, stated at an event hosted by The Wall Street Journal (WSJ) on the 24th of last month that AI could injure or kill many humans in the near future.
Additionally, on the 30th of last month, over 350 IT-industry executives and scientists, including Sam Altman, CEO of OpenAI, the developer of ChatGPT, and its CTO Mira Murati, issued a statement urging that "reducing the risk of human extinction caused by AI should be a global priority."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.