"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note]

Technological Progress Without Reflection on 'Humanity' Has Historically Led to Destructive Outcomes

Editor's Note: Examining failures is a shortcut to success. 'AI Wrong Answer Notes' explores failure cases involving AI products, services, companies, and people.

"Munsonghamnida (Sorry for being from the liberal arts)"

This is a long-standing self-deprecating joke among humanities students who face setbacks in the job market. It is closely tied to the 'Inguron,' the quip that 90 percent of humanities graduates end up idle. The AI revolution seems to be deepening this 'Munsonghamnida' self-mockery.


The AI craze is accelerating interest and investment in the fields of Science, Technology, Engineering, and Mathematics (STEM). This is not a phenomenon unique to Korea.


According to a report by the Washington Post (WP), two out of five American adults regret their college major. Those who studied STEM fields believed "my choice was right," while humanities majors doubted their choices, thinking "why did I do that?"


Survey results from the U.S. Federal Reserve (Fed) also reflect this phenomenon. Thirty-eight percent of college graduates said, "If I had to choose again, I would pick a different major," with humanities and arts majors having the highest rate at 48%. (2021 U.S. Household Economic Well-being Survey)


The biggest reason for these survey results was 'economic benefit.' History and journalism majors earn an average lifetime income of $3.4 million (about 4.95 billion KRW), while chemical engineering, aerospace engineering, and biology majors all have expected earnings exceeding $4.5 million (6.55 billion KRW).


As companies and educational institutions compete to focus on AI development capabilities, coding, and data science skills, the number of applicants to humanities-related departments is steadily decreasing. Predictions even foretell the 'end of the humanities' amid the AI revolution.


Technological Revolution and the Humanities: The Humanities Have Always Been in Crisis
"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note] The robot's hand and the human hand are facing each other. Photo by Getty Images Bank

The 'crisis of the humanities' and the 'end of the humanities' are not caused by AI. Historically, the humanities have always been in crisis.


Looking back at the Industrial Revolution, new technologies brought a productivity revolution along with numerous side effects. As steam engines and mass-production technologies advanced, factories multiplied, but labor exploitation and social inequality deepened. Many factory owners focused solely on maximizing profit. Workers endured long hours and poor conditions, and even children were made to work more than 16 hours a day in factories.


The enactment of labor laws by the late 19th century owes much to the voice of the 'humanities.' It was a call for 'labor for humans,' not 'humans for labor.' Factory owners obsessed only with profit and technological advancement could not conceive such ideas. The Industrial Revolution could lead to coexistence between humans and technology and the enhancement of human welfare because there was attention and concern directed toward 'humans.'


The danger of technology without questions about 'humanity' and 'rightness' also appeared in the 20th century. Totalitarian regimes like Nazi Germany and the Soviet Union used cutting-edge science and technology of their time to monitor and control citizens. The Nazi regime utilized the latest media technologies such as radio and film for mass propaganda and public control, abusing them to oppress minorities. Without technology, they would not have been able to massacre millions in a short period.


AI Should Not Simply Mirror 'Bias-Ridden Humans'
"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note] An image generated by AI in response to the command, "Draw a Black doctor treating poor and sick white children." Contrary to the command, the image shows a white doctor treating Black children. The command was repeated over 300 times, but the result was the same each time. Screenshot from the Antwerp Institute of Tropical Medicine website.

AI technology is based on learning from massive amounts of data. However, most of the data AI learns from are records created by humans. There is a high possibility that all kinds of human prejudices and discrimination are embedded within. AI is not born with an objective standard but learns patterns based on past data, so it risks inheriting past wrong practices and biases. No matter how advanced the technology is, if fed with flawed data, it inevitably produces biased conclusions.
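The mechanism is easy to demonstrate: a model that simply learns the most frequent outcome in its training data will faithfully reproduce whatever bias that data contains. The sketch below is purely illustrative and not from the article; the dataset and the "model" are hypothetical stand-ins for any pattern-learning system.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records (illustrative only):
# each record is (group, hired). Group "A" was favored in the past.
records = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train_majority_model(data):
    """Learn the most frequent outcome per group -- a stand-in for
    any model that reproduces patterns found in its training data."""
    outcomes = defaultdict(Counter)
    for group, hired in data:
        outcomes[group][hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(records)
print(model)  # {'A': True, 'B': False} -- the past bias becomes the "rule"
```

No matter how sophisticated the learner, if the records it is fed encode a skewed past, its "objective" predictions simply replay that skew.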


In January last year, the Institute of Tropical Medicine in Antwerp, Belgium, and Oxford University conducted an experiment using the image-generating AI tool Midjourney. The researchers first generated images of 'poor and sick white-skinned children,' then images of 'Black African medical staff.' After providing the AI with these two sets of images, they prompted it to "create a picture of African Black medical staff caring for poor, sick, white-skinned children."


The prompt was simple and direct, yet the AI's output was absurd: an image of what appeared to be a white doctor treating Black children. The case is a reminder that AI generates images from the (biased) data it has already learned.


What Would Make George Orwell Turn in His Grave
"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note] A picture generated by AI upon request to draw George Orwell surrounded by surveillance cameras with a surprised expression. DALL-E3

Privacy is another crucial issue in the AI era. For AI to function properly, vast amounts of data are needed, including users' personal information and daily-life data. The problem lies in how that collected data is used. If technology advances without humanistic reflection, the risk of a surveillance society cannot be ruled out.


A representative example is China's 'Social Credit System.' It is a vast system that uses AI and big data to monitor and score the behavior of virtually every citizen. Under this system, citizens' online activities as well as offline behaviors are thoroughly monitored and reflected in their reputation scores.


For example, smoking on the street results in -2 points, giving up a seat to an elderly person on the bus earns +1 point, and not visiting parents frequently results in -2 points. Even purchasing books or posting criticism of the government affects the score, and depending on the score, restrictions may be imposed on everyday rights such as buying travel tickets or using financial services.
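A system like the one described is, at bottom, a rule table applied to observed behaviors. The following minimal sketch uses only the point values quoted above; the rule names, the starting score, and the threshold-free design are hypothetical, for illustration:

```python
# Rule table for behavior-based scoring. Only the point values
# (-2, +1, -2) come from the article; the rule names and starting
# score are hypothetical, for illustration only.
RULES = {
    "smoking_on_street": -2,
    "gave_up_bus_seat_to_elderly": +1,
    "skipped_parent_visits": -2,
}

def apply_events(score, events):
    """Apply a sequence of observed behaviors to a citizen's score."""
    for event in events:
        score += RULES.get(event, 0)  # unrecognized behaviors leave the score unchanged
    return score

score = apply_events(1000, ["smoking_on_street", "gave_up_bus_seat_to_elderly"])
print(score)  # 999
```

The chilling part is not the arithmetic but the inputs: every line of the rule table presupposes continuous surveillance of the behavior it scores.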


"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note] Source: Requoted from Asan Institute for Policy Studies

The system's stated goal is extreme: to make it "difficult for the discredited to take a single step." This is social control through algorithms made real. Extreme surveillance by means of technology amounts to the obliteration of privacy. Human rights activists and organizations criticize China's social credit system as "an extreme case of technology-obsessed development with no regard for human rights."


This example clearly shows the dystopia that can arise when humanistic values such as human rights and ethics are not reflected in technology policy. What is needed to avoid such outcomes are ethical guidelines for the use of technology and institutional safeguards for individual rights. In other words, the considerations of ethics and law, the province of the humanities, are indispensable.


The Humanities Are More Necessary in the AI Era
"I Won't Hire Complaining Liberal Arts Freshmen" Thought [AI Answer Note] The elementary school record of Steve Jobs, co-founder of Apple, contains the following note: "An outstanding reader, but wastes too much time reading." Jobs was an avid reader and emphasized reading and humanistic imagination to those around him as well as to employees. The photo shows him introducing the iPhone in 2007. Photo by AP Yonhap News

Experts emphasize that the direction of AI development is not a predetermined fate but ultimately depends on human choices. Continuous human decisions about what kind of AI to develop and where to apply it determine AI's present and future.


No matter how advanced technology becomes, fundamentally, it must be a tool for humans. The same applies to AI. When designing and utilizing AI, it is important not only to consider technical performance but also whether it aligns with human values and needs.


AI is no longer the stuff of science fiction but a reality we face every day. From smartphones to autonomous vehicles to medical diagnostic AI, it permeates every corner of society and is changing how we live. Amid this massive wave of change, the role of the humanities is greater than ever, and still growing.


AI technology lacking humanistic thinking and reflection can potentially harm humans. As seen earlier, biased AI can amplify discrimination, and uncontrolled AI technology can suppress freedom. Ultimately, how AI is designed and used is not only the responsibility of technologists but a matter of philosophy and values for all of us. The humanities form the foundation of those values and help ask the right questions.


Apple co-founder Steve Jobs described 'Apple's DNA' as follows.

"Technology alone is not enough. It is only when technology is combined with the humanities that we get results that truly move us."

In the future, the boundary between technology and the humanities will blur further, and fusion of the two fields will be essential. The most creative AI developers will strive to understand humans and society deeply, and outstanding humanists will embrace the possibilities of technology and offer new insights. When technology and the humanities harmonize in this way, AI can become genuine innovation for humanity.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
