Humans Are Biased... But AI Is Also Biased
Because It Learned from Biased Human Data
Humans Don't Change, but Data Can Be Tuned
▷ At Hospital A, 45 babies are born every day.
▷ At Hospital B, 15 babies are born every day.
▷ The probability of a baby being male or female is 50% each.
▷ There are days in a year when 60% or more of the babies born that day are boys.
◎ Question: Which hospital has more days in a year when 60% or more of the babies born are boys?
Is it Hospital A, where more babies are born, or is it the opposite, Hospital B? When psychologists Daniel Kahneman and Amos Tversky posed this question in a psychological experiment, almost no one answered correctly.
Many answered Hospital A, thinking, "Since it's a large hospital where 45 babies are born every day, it naturally has more days when more boys are born."
However, the correct answer is Hospital B, where fewer babies are born.
As the number of trials increases, the cumulative result approaches the theoretical probability. For a coin toss, that theoretical probability is 50% heads and 50% tails.
This is because the larger the sample size, the more likely the results are to be close to the average (50%), a principle known as the "law of large numbers." Coin tosses make it easier to see. Toss a coin 10 times and you might get 7 heads and 3 tails (70:30). Toss it 100 times and the ratio might settle around 60:40. Push the number of tosses to 1,000 or 10,000 and the ratio comes very close to 50:50. Hospital B, with only 15 births a day, is the small-sample hospital, so its daily tally strays from 50% far more often.
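The gap between the two hospitals is easy to check with a short simulation. The Python sketch below is illustrative only (the function name and the day counts in the comments are ours, not from Kahneman and Tversky's study): it flips a fair coin for every birth and counts how often a day ends with 60% or more boys.

```python
import random

def share_of_skewed_days(births_per_day, num_days=100_000):
    """Fraction of simulated days on which at least 60% of babies are boys.
    Each birth is an independent fair coin flip (P(boy) = 0.5)."""
    skewed = 0
    for _ in range(num_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys >= 0.6 * births_per_day:
            skewed += 1
    return skewed / num_days

for name, births in [("Hospital A", 45), ("Hospital B", 15)]:
    p = share_of_skewed_days(births)
    print(f"{name} ({births} births/day): {p:.1%} of days, ~{p * 365:.0f} days a year")

# Typical output (random, so the numbers vary slightly between runs):
#   Hospital A (45 births/day): 11.7% of days, ~43 days a year
#   Hospital B (15 births/day): 30.4% of days, ~111 days a year
```

The small hospital sees roughly two and a half times as many skewed days, precisely because 15 births carry more day-to-day randomness than 45.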
Humans are biased. We carry many biases: confirmation bias (focusing only on information that matches what we already believe), groupthink (the tendency to conform to the majority opinion in a group), and the halo effect (fixating on a few positive or negative traits and failing to see the whole). Because of these biases, people often fail at prediction. You might think experts would be different. They are not.
Battle Between Judges and AI: Bail Decisions
Even legal professionals, who are supposed to judge with cold rationality based solely on evidence, reveal biases. (Photo: Pixabay)
Let's take the judiciary as an example. Judges make bail decisions (releasing a defendant from custody in exchange for bail or a guarantor) only after rigorous examination. A notable experiment analyzed bail-decision data collected in the United States: researchers developed an AI algorithm to predict the likelihood that a defendant would commit another crime or flee while out on bail. What was the result?
AI won. The algorithm classified the riskiest 1% of defendants as high-risk, predicting that 62% of them would commit crimes if released. Judges nevertheless released nearly half of those individuals back into society, and 63% of the AI's high-risk group did go on to commit crimes; 5% committed serious crimes such as murder. Had judges made bail decisions according to the AI's predictions, such unfortunate incidents could have been reduced.
Judges had access to information the AI did not, such as the defendant's attitude, posture, and appearance in court. Yet this extra information seems to have worked against them. Such human biases raise expectations that AI will make more efficient and fairer decisions than humans do. In comments on news about criminal judgments, calls to "replace judges with AI" are common, and across industries there are demands for judgments and decisions grounded in pure, unbiased data and objective facts.
But is that really the case? Are decisions made by AI truly fair and impartial?
Another Bias: The Belief That AI Is Fair and Impartial
AI is also biased. Not because it is human, but because it learned from humans: AI makes wrong judgments if trained on flawed data. COMPAS, used in the U.S. criminal justice system, is a representative case of AI bias. COMPAS is an algorithm that predicts a criminal defendant's risk of recidivism; it derived predictions from past data and was used to inform release and sentencing decisions. COMPAS significantly reduced court workloads, but its results sparked ongoing controversy once serious biases were revealed.
In predicting recidivism, the AI overestimated the risk for Black defendants and underestimated it for White defendants. Among Black defendants who did not reoffend, 44.9% had been labeled high-risk, compared with 23.5% of White defendants. Conversely, among White defendants who did reoffend, 47.7% had been labeled low-risk, versus 28.0% of Black defendants. These results were difficult to defend against accusations of racial discrimination.
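In fairness-research terms, these are unequal false positive and false negative rates across groups. The sketch below shows how such an audit is computed; the toy records are invented for illustration and are not the real COMPAS data.

```python
from collections import defaultdict

# Toy records: (group, predicted_high_risk, actually_reoffended).
# Purely illustrative -- not drawn from the real COMPAS dataset.
records = [
    ("Black", True, False), ("Black", True, True), ("Black", False, True),
    ("Black", True, False), ("White", False, True), ("White", False, False),
    ("White", True, True), ("White", False, True),
]

stats = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, high_risk, reoffended in records:
    s = stats[group]
    if reoffended:
        s["pos"] += 1              # actual reoffenders...
        s["fn"] += not high_risk   # ...who were labeled low-risk
    else:
        s["neg"] += 1              # actual non-reoffenders...
        s["fp"] += high_risk       # ...who were labeled high-risk

# Assumes every group contains both reoffenders and non-reoffenders.
for group, s in stats.items():
    print(f"{group}: false positive rate {s['fp'] / s['neg']:.0%}, "
          f"false negative rate {s['fn'] / s['pos']:.0%}")
```

A large gap between groups on either rate, as ProPublica found for COMPAS, is the signature of the bias described above.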
The 'Women's Colleges' That Amazon's AI Hiring System Disliked
Amazon's AI recruitment system favored men: the algorithm was built on past hiring data, and that data was biased. (Photo: Pixabay)
Amazon, the world's largest e-commerce company, employed 1.54 million people as of 2022. At that scale, recruitment itself is a huge task: how many thousands of resumes must be reviewed? Introducing an AI-based hiring system in 2014 was an almost inevitable decision. The system automatically evaluated countless applicants' resumes, having learned from the resumes submitted to Amazon over the previous decade and the performance data of those applicants.
Unfortunately, the system did not last long. It turned out that women were being excluded in the final stages of hiring: if a resume contained the words "women's college" or terms like "women's chess club," it was penalized.
The problem was bias in the training data. Most resumes submitted to Amazon over the preceding decade came from men, so most of the successful candidates deemed "suitable for our company" were naturally men. The AI learned this pattern and came to prefer male applicants. Amazon's engineers tried to fix the problem but could not predict what other discriminatory outcomes might surface. This is similar to how, even after AlphaGo beat Lee Sedol 9-dan, its own engineers found it hard to explain why it played certain moves in certain situations.
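The mechanism itself is easy to reproduce on synthetic data. The sketch below is a toy reconstruction of the failure mode, not Amazon's actual system; every feature and number in it is invented. Gender is never given to the model, yet a resume term correlated with gender still ends up with a negative weight, because the historical hiring labels were skewed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical hiring" data. Gender is NOT a model feature,
# but the resume term "women's ..." is strongly correlated with it.
is_woman = rng.random(n) < 0.2                  # mostly male applicant pool
womens_term = is_woman & (rng.random(n) < 0.8)  # proxy feature the model sees
skill = rng.normal(size=n)                      # genuinely job-relevant signal

# Biased historical labels: equal skill, but women were hired less often.
hired = skill + rng.normal(scale=0.5, size=n) - 1.0 * is_woman > 0

X = np.column_stack([skill, womens_term.astype(float)])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:        {model.coef_[0][0]:+.2f}")  # clearly positive
print(f"weight on women's term: {model.coef_[0][1]:+.2f}")  # clearly negative
# The model has learned the historical bias without ever seeing gender.
```

Dropping the explicit gender column is not enough: the bias re-enters through whatever proxies the data offers, which is why Amazon's engineers could not simply patch it away.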
Gender discrimination was not the hiring system's only problem; racial discrimination and ageism were also likely to emerge. Eventually, Amazon scrapped the system entirely.
The results produced by AI come out of a black box. Whether an AI will make discriminatory judgments can often be known only by looking at its results; even the people who designed the algorithm find it difficult to explain how it reaches them. Because of this black-box nature, it is also difficult for those harmed by biased results to hold anyone legally accountable.
AI Can Be Biased Too... But the 'Great Wave' Is Unavoidable
Contrary to the hope that it would be unlike humans, AI does not always make fair and impartial decisions. If its data is biased and distorted, AI will produce biased and distorted results. The risks of bias can be fatal, inflicting moral damage on companies as well as economic losses. The fact that AI is a black box cannot be an excuse for ignoring discrimination and prejudice.
However, avoiding AI altogether simply because it can be discriminatory is not a good approach; measured by the scope and frequency of discrimination, humans can be far more discriminatory than AI. Instead, companies should recognize AI's biases, anticipate the various risks, and manage them through systematic procedures. By checking for and correcting potential discrimination and bias from the planning stage through operation, they can improve productivity with AI while minimizing its side effects.
Peter Verdegem, a professor at the University of Westminster in the UK, predicted in his book "AI for Everyone" that "companies will use AI more than they do now to treat people as data," but added, "Nevertheless, we must not lose our awareness of algorithmic bias."
AI-based hiring is already widespread in the global corporate world. According to data released by the U.S. government in January last year, 83% of U.S. companies and 99% of Fortune 500 companies use AI somewhere in their hiring processes. In 2022, Goldman Sachs used AI to hire interns, selecting 3,700 of 236,000 applicants, about 1.5%, with an algorithm far more advanced than Amazon's, one that learned from the side effects and failures of that 2014 system.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
