The Human Judgment That Saved Humanity from Nuclear War [AI Mistake Log]

'The United States Launches Five Nuclear Missiles at the Soviet Union'
Instead of Immediate Retaliation, He Chose to Pause and Reflect on the System Alert
"It Doesn't Make Sense for the United States to Fire Only Five Missiles"
Why Humans Remain Irreplaceable in the Age of AI

Editor's Note: Examining failures is the shortcut to success. 'AI Mistake Log' explores cases of failure related to AI products, services, companies, and individuals.
Nuclear button. Getty Images Bank

'Emergency, emergency. The United States has launched intercontinental ballistic missiles (ICBMs).'


Shortly after midnight on September 26, 1983, loud sirens began to blare in Serpukhov-15, a secret bunker southwest of Moscow.


The Soviet Union's early warning system reported that it had detected five ICBMs launched by the United States. The bunker descended into chaos.


Lieutenant Colonel Stanislav Petrov of the Soviet Air Defence Forces, then 44 years old, was the duty officer that night. He had only a few minutes to reach a conclusion. He went through the launch detection and re-verification procedures required by the system. The situation board in front of him showed five missiles approaching Moscow.


Petrov had regulations he was supposed to follow. He was required to report the alert and the results of the re-verification to his superiors. It was clear that the Soviet leadership would then issue the order to 'retaliate with nuclear missiles.' Petrov was paralyzed with fear.


"It felt like sitting on a hot frying pan.
The pressure was so intense that I couldn't even get up from my seat."

While the other officers on duty were panicking, Petrov tried to regain his composure. He thought to himself:


"If the United States were really starting a nuclear attack on Moscow...
wouldn't it be on a much larger scale than just five missiles?"

Stanislav Petrov. His heroic actions were revealed belatedly through media reports. Although he received various human rights awards and a UN commendation, he remained humble. In an interview late in his life, he said, "People call me a hero, which is actually a bit surprising. I never thought of myself as that kind of person. I just did what I had to do." Photo by the international nonprofit organization PSR.

Petrov was also a computer engineer who had participated in developing the early warning system's code.


"And why is it only the airborne warning system that's going off?
The ground radar hasn't detected any signs of an attack."

Finally, Petrov called his superiors. He did not report that "the United States has launched nuclear missiles." Instead, he said, "It is a computer system malfunction."


Twenty minutes passed after the initial alert. No missiles landed on Soviet territory. Petrov's judgment had been correct.


The false alarm was caused by a Soviet satellite mistakenly detecting sunlight reflected off clouds. The events of that day were declassified only in 1998, and became known to the world through a report by the German daily newspaper Bild.


If Petrov had acted strictly according to the alert and the manual, the world as we know it might not exist today.


Fatal Failure of a Perfect System: Aircraft Crash
Recovering debris from flight AF447. Photo by AFP

On June 1, 2009, Air France Flight 447 (AF447), traveling from Rio de Janeiro, Brazil to Paris, France, crashed into the Atlantic Ocean. None of the 228 people on board survived.


The aircraft, an Airbus A330, was considered one of the safest airplanes of its time and was equipped with state-of-the-art autopilot systems. Studies since the 1970s had shown that most aviation accidents were caused by human error, so later aircraft came to rely heavily on autopilot. The system was so reliable that the pilots hardly needed to worry.


However, this approach led to a paradoxical outcome. The pilots of AF447 had grown accustomed to relying on the autopilot and had very little manual flying experience. Unfortunately, as the plane passed through an area of bad weather, its airspeed sensors iced over, the speed readings became unreliable, and the autopilot disengaged. The pilots were thrown into confusion. They repeatedly made incorrect manual inputs, and the plane ultimately crashed into the ocean.


The Paradox of Automation
A robot holding a human skull. Getty Images Bank

As automated systems become more advanced, people have fewer opportunities to gain hands-on experience. The case of AF447 illustrates this. Because the automated system was so reliable, the pilots' skills actually deteriorated over time.


Economist Tim Harford calls this the 'paradox of automation.' In his book 'Messy', he explained, "The better the automation system, the less experienced the human operators become, and the stranger the situations they will face." The more perfect the system, the more complacent humans become, and in a real crisis they lose their ability to respond.


Harford cites a law formulated by aviation engineer Earl Wiener: "Digital devices filter out small errors but create opportunities for big ones." Mechanization and automation prevent everyday mistakes, but at crucial moments they can cause even bigger failures.


Many companies and executives hope for a productivity revolution through AI. They believe AI is objective, accurate, unaffected by emotion, and incapable of making mistakes, and they expect it will always make better decisions than imperfect humans.


However, the stories of Petrov and the Atlantic airplane crash tell a different tale. No matter how advanced a system is, it can still make unexpected errors. In such crisis situations, human judgment becomes even more important. Petrov did not blindly trust the system. True to his background as an engineer, he also understood the system's limitations. He considered context and circumstances as much as he did the rules.


In contrast, the AF447 pilots had become so dependent on the system that they lost their basic skills. A similar situation could arise in the era of fully autonomous vehicles. When fully autonomous vehicles become commonplace, the car will make most decisions during travel. But what happens if the driver suddenly has to take over? Someone who has not been focused on driving for a long time will find it difficult to respond properly in an emergency.


Harford says, "Automation is a marvelous technology, but if we trust it too much, consumers can actually be harmed." He emphasizes that the scope of automation should be minimized, allowing it only in controlled and straightforward situations. Although some criticize Harford's view as extreme and outdated, his arguments about automation are worth listening to.


Human-Centered AI
Ultimately, it is humans who complete the work and take responsibility for it. Photo by Getty Images Bank

There is no doubt that industrial development and increased human productivity have benefited from mechanization and automation. There is no need to fall into a dichotomy of humans versus machines (AI).


Petrov did not simply ignore the system's information; he critically examined it. He made his decision by combining the data provided by the system with his own experience and common sense. The same applies to the AF447 accident. The autopilot system itself was not the problem; the issue was that humans became overly dependent on the system. If the pilots had practiced manual flying regularly and understood the system's limitations, the accident could have been avoided.


The more technology advances, the more important the human role becomes. Machines are good at processing rules and data, but in exceptional and complex situations, human intuition and judgment are necessary.


The same is true in the age of AI. AI is more likely to support humans than to ultimately replace them. Humans should refer to AI's analysis and predictions, but the final decision must be their own.


To achieve this, we must first acknowledge the limitations of AI. No matter how sophisticated a system is, it can still make unexpected errors. And even if we rely on systems, humans must continue to practice and develop their core skills. Above all, we must not lose our critical thinking. In the end, humans must always be at the center.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

