"Learning How to Learn" KAIST Finds the 'Key' to Brain-Inspired Artificial Intelligence Learning

The human brain begins learning through spontaneous random activity even before it acquires external sensory information. Through this unconscious activity, the brain accumulates the information it needs on its own, enabling flexible learning across different situations. In contrast, conventional artificial intelligence learns mechanically from stored data, a pattern quite different from the brain's.


This difference, known as the weight transport problem, has been regarded as the greatest obstacle to developing AI that mimics the biological brain. It also explains, at a fundamental level, why typical artificial neural networks, unlike the biological brain, require large-scale memory and computation over accumulated data in order to learn.


Illustration depicting research on understanding the brain's operating principles through artificial neural networks. Provided by KAIST

On the 23rd, KAIST announced that Professor Baek Se-bum's research team in the Department of Brain and Cognitive Sciences had solved the weight transport problem and explained the principle that enables resource-efficient learning in biological neural networks.


The decades-long advancement of artificial intelligence has been based on error backpropagation learning proposed by Geoffrey Hinton, who won this year’s Nobel Prize in Physics.


However, error backpropagation learning has been considered impossible in the biological brain, because it rests on an unrealistic assumption: each neuron would have to know all of the connection weights of the next layer in order to compute the error signals needed for learning.
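That requirement can be made concrete in a few lines. The NumPy sketch below (the two-layer network, its sizes, and the tanh nonlinearity are illustrative assumptions, not the study's actual model) shows exactly where the transpose of the forward weights appears in the backward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)           # input
W1 = rng.standard_normal((8, 4))     # input -> hidden forward weights
W2 = rng.standard_normal((3, 8))     # hidden -> output forward weights
target = rng.standard_normal(3)

h = np.tanh(W1 @ x)                  # hidden activity
y = W2 @ h                           # linear output

e = y - target                       # output error
# The hidden-layer error uses W2.T: each hidden neuron must "know" its
# exact outgoing weights. This reuse of the forward weights in the
# backward pass is the weight transport assumption considered
# biologically unrealistic.
delta_h = (W2.T @ e) * (1 - h**2)    # tanh'(a) = 1 - tanh(a)**2

dW2 = np.outer(e, h)                 # gradient for W2
dW1 = np.outer(delta_h, x)          # gradient for W1
```

The single line computing `delta_h` is the crux: no known biological mechanism lets a neuron read out the downstream weight matrix in this way.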


This challenge, named the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for discovering the structure of DNA, shortly after Hinton proposed error backpropagation learning in 1986. It highlighted a fundamental difference between the operating principles of natural and artificial neural networks.


Since then, researchers including Hinton have repeatedly attempted to create biologically plausible models that can solve the weight transport problem at the intersection of AI and neuroscience and implement the brain’s learning principles.


In 2016, a joint research team from the University of Oxford and DeepMind in the UK first proposed the concept that error backpropagation learning is possible without weight transport, attracting attention in academia.


However, biologically plausible error backpropagation learning without weight transport proved slow and inaccurate, and this inefficiency limited its practical application.
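The 2016 approach, commonly known as feedback alignment, sidesteps weight transport by routing errors backward through a fixed random matrix instead of the transpose of the forward weights. A minimal sketch, assuming a toy regression task (the network sizes, learning rate, and data are illustrative, not from the original work):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 4)) * 0.1   # input -> hidden
W2 = rng.standard_normal((2, 16)) * 0.1   # hidden -> output
B = rng.standard_normal((2, 16))          # fixed random feedback weights (not W2.T)

X = rng.standard_normal((200, 4))
T = X @ rng.standard_normal((4, 2))       # toy linear regression targets

lr = 0.01
losses = []
for _ in range(300):
    H = np.tanh(X @ W1.T)                 # hidden activity
    Y = H @ W2.T                          # output
    E = Y - T                             # error
    losses.append(float((E ** 2).mean()))
    # Feedback alignment: the error flows back through B, never W2.T,
    # so no neuron needs to know its downstream weights.
    dH = (E @ B) * (1 - H ** 2)
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * dH.T @ X / len(X)
```

Learning still works because the forward weights gradually come to agree with the random feedback pathway, but, as the article notes, convergence is typically slower and less accurate than exact backpropagation.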


In contrast, the technology developed by Professor Baek's research team is significant in that pre-training a brain-inspired artificial neural network on random information enables fast and accurate learning once actual data is encountered.


(From left) Professor Baek Se-bum of the Department of Brain and Cognitive Sciences, Professor Lee Sang-wan, and master's student Cheon Jeong-hwan. Provided by KAIST

In this study, Professor Baek's team focused on the fact that the biological brain already begins learning internally, through spontaneous random neural activity, before any external sensory experience.


Inspired by this, they pre-trained a biologically plausible neural network without weight transport on meaningless random noise, and confirmed that this creates symmetry between the network's forward and backward connections, an essential condition for error backpropagation learning. In other words, random pre-training makes learning possible without weight transport.


Professor Baek's team showed that learning random information before actual data has the property of 'meta-learning', learning how to learn, and confirmed that neural networks pre-trained on random information learn much faster and more accurately when exposed to actual data, improving learning efficiency without weight transport.
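The two-phase idea, first "learning to learn" on noise and only then learning the actual task, can be sketched on top of the same feedback-alignment setup. Everything below (network sizes, noise statistics, learning rate, toy task) is an illustrative assumption; the study's reported result, that the noise phase creates forward/backward symmetry and speeds up later learning, is what this toy merely gestures at:

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal((16, 4)) * 0.1
W2 = rng.standard_normal((2, 16)) * 0.1
B = rng.standard_normal((2, 16))      # fixed random feedback weights

lr = 0.01

def fa_step(W1, W2, X, T):
    """One feedback-alignment update; errors flow back through B, not W2.T."""
    H = np.tanh(X @ W1.T)
    E = H @ W2.T - T
    W1 = W1 - lr * ((E @ B) * (1 - H ** 2)).T @ X / len(X)
    W2 = W2 - lr * E.T @ H / len(X)
    return W1, W2, float((E ** 2).mean())

# Phase 1: pre-training on pure random noise -- random inputs, random
# targets, no structure at all. The study reports that this phase aligns
# the forward weights with the feedback weights (forward/backward symmetry).
for _ in range(500):
    X_noise = rng.standard_normal((64, 4))
    T_noise = rng.standard_normal((64, 2))
    W1, W2, _ = fa_step(W1, W2, X_noise, T_noise)

# Diagnostic for that symmetry: cosine similarity between the flattened
# forward weights W2 and the feedback matrix B.
sym = float(W2.ravel() @ B.ravel() /
            (np.linalg.norm(W2) * np.linalg.norm(B)))

# Phase 2: learning an actual (toy) task after noise pre-training.
M = rng.standard_normal((4, 2))
X_task = rng.standard_normal((200, 4))
T_task = X_task @ M
task_losses = []
for _ in range(300):
    W1, W2, loss = fa_step(W1, W2, X_task, T_task)
    task_losses.append(loss)
```

The `sym` diagnostic is one simple way to quantify forward/backward agreement; the study's own analyses and benchmarks are, of course, far more extensive than this toy.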


The research team expects this to be a breakthrough for future brain-based AI and neuromorphic computing technology development.


Professor Baek said, "The research team broke with the conventional wisdom of data-driven machine learning and presented a new perspective by focusing on the neuroscientific principle that the brain creates the right conditions even before learning begins. The main achievement of this study is not only solving an important problem in artificial neural network learning using clues from developmental neuroscience, but also providing insight into the brain's learning principles through artificial neural network models."


Meanwhile, this research was conducted with support from the Basic Science Research Program of the National Research Foundation of Korea, the Talent Training Program of the Institute for Information & Communications Technology Planning & Evaluation, and the KAIST Singularity Professorship Program.


The study involved Cheon Jeong-hwan, a master's student in the Department of Brain and Cognitive Sciences at KAIST (first author), and Professor Lee Sang-wan (co-author). The results will be presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), to be held December 10 to 15 this year in Vancouver, Canada.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
