[Reading Science] KAIST Finds Clues to AI That Learns Like the Brain

Uncovering How the Human Prefrontal Cortex Separates Goals and Uncertainty for Learning
Suggesting Solutions to the "Stability-Flexibility Dilemma" in Reinforcement Learning

Humans are able to make relatively stable judgments and swiftly adjust their strategies, even when goals suddenly change or situations become uncertain. In contrast, traditional reinforcement learning-based artificial intelligence (AI), such as AlphaGo, tends to be vulnerable to changes in goals and lacks flexibility in uncertain environments. A Korean research team has identified that this difference originates from the unique information processing structure of the human prefrontal cortex, providing clues for the development of AI that can learn as stably and flexibly as the human brain.


The Korea Advanced Institute of Science and Technology (KAIST) announced on December 14 that a research team led by Professor Sangwan Lee from the Department of Brain and Cognitive Sciences, in collaboration with IBM AI Research, has uncovered the core principles by which the human prefrontal cortex processes goal changes and environmental uncertainty. The study is credited with fundamentally explaining the 'stability-flexibility dilemma' faced by conventional reinforcement learning-based AI and with suggesting a new direction for the design of next-generation learning algorithms.

[Figure] The flexibility-stability balance between humans and AI (excerpt from the paper; copyright: Nature Communications). Left: a situation in which the uncertainty of goals and of the environment changes continuously, illustrating decision-making flexibility that adapts to changing goals and decision-making stability that remains unaffected by environmental changes. Right: measurements of the flexibility-stability balance in decision-making for AI models (a model-based agent and a model-free agent) and human subjects. Provided by the research team.

Goals and Uncertainty: The Prefrontal Cortex Stores Them Separately

The research team focused on the fact that while traditional reinforcement learning models lose learning stability when goals change frequently and struggle to adapt in uncertain environments, humans manage to stay both stable and flexible at the same time. To explain this, the team combined functional magnetic resonance imaging (fMRI) experiments, reinforcement learning models, and AI analysis techniques to precisely analyze how the human prefrontal cortex represents information.


The results revealed that the lateral prefrontal cortex in humans possesses a structure that stores 'goal information' and 'uncertainty information' separately (factorized embedding), preventing interference between the two. The more distinct this structure, the more quickly individuals could revise their strategies when goals changed, while maintaining stable judgment even in unstable environments.


The research team explained that this structure resembles multiplexing in communications technology, which carries multiple signals simultaneously. In other words, the prefrontal cortex simultaneously operates one set of channels that responds sensitively to changes in goals and another that separately processes environmental uncertainty.
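To make the 'separate channels' idea concrete, the minimal sketch below (in Python; all variable names and numbers are illustrative assumptions, not the study's analysis pipeline) compares a factorized code, in which goal and uncertainty occupy orthogonal directions of a population vector, with an entangled code that mixes them. Only in the factorized case does the goal readout stay fixed when uncertainty changes.

```python
# Toy illustration (not the paper's method): when goal and uncertainty are
# encoded along orthogonal directions of a population vector, a linear readout
# of one variable is unaffected by changes in the other.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50

# Two orthonormal encoding directions, standing in for separate "channels".
goal_axis = rng.standard_normal(n_units)
goal_axis /= np.linalg.norm(goal_axis)
unc_axis = rng.standard_normal(n_units)
unc_axis -= (unc_axis @ goal_axis) * goal_axis   # remove overlap with goal_axis
unc_axis /= np.linalg.norm(unc_axis)

def factorized_response(goal_value, uncertainty):
    """Population activity when the two variables use separate subspaces."""
    return goal_value * goal_axis + uncertainty * unc_axis

def entangled_response(goal_value, uncertainty):
    """Population activity when both variables share one mixed direction."""
    mixed = (goal_axis + unc_axis) / np.sqrt(2)
    return (goal_value + uncertainty) * mixed

# Read out the goal while uncertainty varies: the factorized code keeps the
# goal readout constant, the entangled code lets uncertainty interfere.
for unc in (0.0, 1.0, 2.0):
    fact = factorized_response(1.0, unc) @ goal_axis
    tangled = entangled_response(1.0, unc) @ goal_axis
    print(f"uncertainty={unc:.1f}  factorized readout={fact:.2f}  "
          f"entangled readout={tangled:.2f}")
```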


Meta-Learning: Deciding Not Just "What" but "How" to Learn

A particularly noteworthy aspect of this study is that the role of the prefrontal cortex extends beyond simply executing learning. The researchers confirmed that the prefrontal cortex has a 'meta-learning' channel, which autonomously decides which learning strategy to use depending on the situation.


This means that the human brain is structured to learn not only "what to learn" but also "how to learn." The research team explained that this meta-learning capability is the fundamental reason why humans can maintain plans in ever-changing environments and flexibly switch strategies when necessary.
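As a loose illustration of that idea, the sketch below (again Python; the class, learning rates, and mixing rule are invented for the example rather than taken from the paper) shows a meta-level controller that keeps a fast, flexible value estimate alongside a slow, stable one and shifts its trust between them according to how volatile recent outcomes look, so it adapts after a goal switch without overreacting to noise.

```python
# Hypothetical sketch of a meta-controller: it does not just learn values, it
# decides *how* to learn by weighting a flexible estimator against a stable one
# based on a running volatility signal. Illustrative only, not the study's model.
import random

class MetaLearner:
    def __init__(self, n_actions, fast_lr=0.5, slow_lr=0.05):
        self.fast = [0.0] * n_actions   # flexible estimator: adapts quickly to goal changes
        self.slow = [0.0] * n_actions   # stable estimator: resists noisy feedback
        self.fast_lr, self.slow_lr = fast_lr, slow_lr
        self.volatility = 0.0           # meta-level estimate of how much outcomes are shifting

    def value(self, a):
        # High volatility -> trust the flexible estimate; low -> trust the stable one.
        w = min(1.0, self.volatility)
        return w * self.fast[a] + (1 - w) * self.slow[a]

    def act(self, epsilon=0.1):
        n = len(self.fast)
        if random.random() < epsilon:
            return random.randrange(n)
        return max(range(n), key=self.value)

    def learn(self, a, reward):
        err = reward - self.slow[a]
        self.volatility = 0.9 * self.volatility + 0.1 * abs(err)  # update meta signal
        self.fast[a] += self.fast_lr * (reward - self.fast[a])
        self.slow[a] += self.slow_lr * err

# Tiny two-armed bandit whose rewarding arm switches halfway through (a goal change).
agent, best = MetaLearner(n_actions=2), 0
for t in range(200):
    if t == 100:
        best = 1                        # the goal changes
    a = agent.act()
    agent.learn(a, reward=1.0 if a == best else 0.0)
print("values after goal switch:", [round(agent.value(a), 2) for a in range(2)])
```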

[Photo] From left: Sangwan Lee, Professor at KAIST; Doyun Sung, PhD candidate; (top) Mattia Rigotti, PhD, IBM AI Research. Courtesy of KAIST.

Potential for Brain-Inspired and Safer AI

This research can be extended to a variety of fields, including the analysis of individual differences in reinforcement learning and meta-learning ability, personalized education design, cognitive assessment, and human-computer interaction (HCI). In particular, applying the prefrontal cortex's information representation structure to AI could lead to 'safer AI' that better understands human intentions and values and makes fewer risky decisions.


Professor Sangwan Lee stated, "This study identifies, from an AI perspective, the operating principles of the brain that allow it to flexibly follow changing goals while planning stably," adding, "This principle will become the core foundation for next-generation AI that adapts to change and learns more safely and intelligently, just like humans."


This research was led by KAIST doctoral candidate Yundo Sung as the first author, with Dr. Mattia Rigotti of IBM AI Research as the second author, and Professor Sangwan Lee as the corresponding author. The results were published in the international journal Nature Communications on November 26.


The title of the paper is "Factorized embedding of goal and uncertainty in the lateral prefrontal cortex guides stably flexible learning," and the DOI is 10.1038/s41467-025-66677-w.


This study was supported by the Limit-Challenge R&D Project of the Ministry of Science and ICT.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
