
Pukyong National University AI Research Institute Wins 'BEST PAPER AWARD' from Korea Multimedia Society

The research team at the Artificial Intelligence Research Institute of Pukyong National University (President Bae Sang-hoon) won two 'BEST PAPER AWARDs' at the 2024 Fall Academic Conference Undergraduate Paper Competition of the Korea Multimedia Society.

The research team at the Artificial Intelligence Research Institute, Pukyong National University. Provided by Pukyong National University

At the on-site evaluation held recently at Jeju National University, a team consisting of Pukyong National University AI Research Institute students Joo Sung-wook, Choi Da-nyeong, and Jung Ye-chan, senior researcher Kim Chae-gyu, and Dr. Jeong Chi-yoon of the Electronics and Telecommunications Research Institute received the 'BEST PAPER AWARD' for the paper "GradF2M: XAI-based Music Generation Method for Visual-Auditory Sensory Substitution." A second team, comprising Pukyong National University students Choi Da-nyeong, Jang Ye-chan, and Joo Sung-wook, senior researcher Kim Chae-gyu, and Dr. Moon Kyung-duk of the Electronics and Telecommunications Research Institute, received the 'BEST PAPER AWARD' for the paper "A Study on Emotion Music Generation Method Based on Facial Expression Emotion Extraction Using LSTM and Transformer Models."


Both papers focus on sensory substitution technology, which converts information that would normally be perceived through a lost sense into signals for another, intact sense so the user can still perceive it. Each paper proposed a Visual-to-Auditory (V2A) substitution method that generates emotional music from facial expressions, and both received excellent evaluations.


The paper "GradF2M: XAI-based Music Generation Method for Visual-Auditory Sensory Substitution" proposed a method that extracts high-dimensional emotional information from facial expression images using an emotion classification model and converts that information into auditory information in the form of musical melodies. In addition, it derived user-understandable information from explainable AI (XAI) and used it to adjust the auditory output, improving sensory substitution recognition rates.
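The pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the GradF2M implementation: the stand-in classifier, the saliency weighting, and all function names are assumptions introduced here to show the classify → weight → generate-melody flow.

```python
# Hypothetical sketch of a V2A pipeline: classify an emotion from a face
# image, derive an XAI-style confidence weight, and shape a melody with it.
# None of these functions come from the paper; they are illustrative stand-ins.
from typing import List, Tuple

def classify_emotion(image: List[List[float]]) -> Tuple[str, float]:
    """Stand-in for an emotion classification model.

    Returns (emotion_label, confidence). A real system would run a trained
    CNN here; this toy version just thresholds mean pixel brightness.
    """
    n_pixels = sum(len(row) for row in image)
    brightness = sum(sum(row) for row in image) / max(1, n_pixels)
    return ("joy", 0.9) if brightness > 0.5 else ("sadness", 0.8)

def saliency_weight(confidence: float) -> float:
    """Stand-in for an XAI-derived weight (e.g. from Grad-CAM attributions)
    that controls how strongly the melody expresses the detected emotion."""
    return min(1.0, max(0.0, confidence))

def emotion_to_melody(emotion: str, weight: float) -> List[int]:
    """Map an emotion label to a short MIDI-pitch melody, scaled by weight."""
    base = {"joy": [60, 64, 67, 72], "sadness": [57, 60, 64, 69]}.get(emotion, [60])
    shift = int(round(weight * 2))  # stronger evidence -> brighter transposition
    return [pitch + shift for pitch in base]

image = [[0.8, 0.9], [0.7, 0.6]]           # toy 2x2 "face image"
emotion, conf = classify_emotion(image)
melody = emotion_to_melody(emotion, saliency_weight(conf))
```

In the real system the saliency weight would come from attribution maps over the facial image; here a scalar confidence stands in to keep the sketch self-contained.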


The paper "A Study on Emotion Music Generation Method Based on Facial Expression Emotion Extraction Using LSTM and Transformer Models" used LSTM (Long Short-Term Memory) and Transformer models to generate emotional music from melodies derived from facial expressions. To enhance emotional expressiveness, musical attributes such as tempo and key were adjusted for specific emotions such as joy and sadness. The research also highlighted that emotional music generation could be applied not only to humans but also to robots and animals, drawing attention as a contribution to the future development of emotion recognition and interaction technologies.
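The emotion-conditioned attribute adjustment described above might look like the following. This is a hypothetical sketch: the emotion labels, tempo values, and key choices are assumptions for illustration, not values from the paper.

```python
# Illustrative mapping from a detected emotion to musical attributes
# (tempo and key), as described in the paper's approach. All specific
# values here are assumptions, not the authors' settings.
EMOTION_TO_MUSIC = {
    "joy":     {"tempo_bpm": 140, "key": "C major"},  # fast, major key
    "sadness": {"tempo_bpm": 70,  "key": "A minor"},  # slow, minor key
    "anger":   {"tempo_bpm": 160, "key": "D minor"},
    "calm":    {"tempo_bpm": 90,  "key": "G major"},
}

def music_attributes(emotion: str) -> dict:
    """Return tempo/key settings for an emotion label.

    Unknown labels fall back to neutral settings so generation never fails.
    """
    return EMOTION_TO_MUSIC.get(emotion, {"tempo_bpm": 100, "key": "C major"})
```

In a full system these attributes would condition the LSTM or Transformer decoder during melody generation; the lookup table simply makes the tempo/key adjustment step concrete.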


At the laboratory of senior researcher Kim Chae-gyu at Pukyong National University's Artificial Intelligence Research Institute, undergraduate students take part in a range of research projects, writing program code, research papers, and patent specifications, and accumulating hands-on research experience through technology development.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
