Honglak Lee, Chief Scientist at LG AI Research, Receives 'Social Impact Award' at One of the World's Top Three NLP Conferences

Large Language Model (LLM) Paper Accepted
"Insights on Preventing Cultural Bias"

Honglak Lee, Chief AI Scientist (CSAI, Vice President level) at LG AI Research, co-authored a large language model (LLM) paper that received the 'Social Impact Award' at one of the world's top natural language processing (NLP) conferences.



Honglak Lee, Chief AI Scientist (CSAI, Vice President level) at LG AI Research.
[Photo by LG]

According to the AI industry on the 2nd, a paper co-authored by Honglak Lee and Moontae Lee, Head of the Advanced Machine Learning Lab (Senior Executive Director level) at LG AI Research, won the Social Impact Award at NAACL 2024, held from June 16 to 21 (local time) in Mexico City. NAACL is one of the world's top three NLP conferences; global big tech companies such as Google and Amazon, along with NLP researchers worldwide, participate each year to share their latest research results.


The paper is titled "Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense." Six researchers from four institutions co-authored it: LG AI Research, the University of Michigan, the University of Illinois, and the Singapore University of Technology and Design. LG AI Research and the University of Michigan led the research.


The paper points out that ▲LLMs do not correctly interpret the 'cultural commonsense' of every culture in the world, ▲the cultural commonsense LLMs acquire reflects the biases of specific cultures, and ▲LLMs may acquire different cultural backgrounds depending on the language of the query. For example, when the question "Is lassi a traditional drink of this country?" is posed with 'Finland' substituted for 'India,' the LLM responds only with "No," revealing this limitation.
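This kind of probe can be sketched in a few lines. The snippet below is a minimal illustration of the setup described above, not the paper's actual experimental code: it substitutes different countries into the same yes/no question and compares the model's answers. The openai package, the model name, and the prompt wording are all illustrative assumptions.

# Minimal sketch (not the authors' code) of a cultural-commonsense probe:
# pose the same question with different countries substituted and compare
# the answers. Assumes the `openai` Python package and an OPENAI_API_KEY
# environment variable; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()
QUESTION = "Is lassi a traditional drink of {country}? Answer Yes or No."

for country in ("India", "Finland"):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model can be probed
        messages=[{"role": "user", "content": QUESTION.format(country=country)}],
        max_tokens=3,
        temperature=0,
    )
    # A culturally aware model would answer "Yes" for India and, for
    # Finland, ideally note that lassi is Indian rather than reply
    # with a bare "No."
    print(country, "->", resp.choices[0].message.content)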


The NAACL organizers praised the paper, stating, "It addresses important and timely issues regarding cultural biases in LLMs and provides deep insights and promising directions for future research that can significantly impact the safety and fairness of AI systems." They also noted that by revealing the cultural biases inherent in LLMs, the work underscores the urgent need to develop culturally aware language models that reduce social bias and promote inclusivity in AI technology.


Within the industry, the paper is expected to have a positive impact not only on academic researchers but also on corporate product developers, especially as issues of AI safety and fairness gain prominence. An industry insider said, "With discussions on the United Nations (UN) Global Digital Compact (GDC) intensifying, making 'bias-free' LLMs widely available is emerging as a key topic, so this paper could offer valuable implications for the industry."


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
