[Independent AI Announcement] Reduced Computation, Outperforming ChatGPT... LG AI Research Unveils 'K-Exaone'

First Announcement of Independent AI Foundation Model
K-Exaone Unveiled, Operable Even on Mid- to Low-Tier GPUs

Choi Junggyu, Head of the AI Agent Group at LG AI Research, presents at the first announcement event of the "Independent AI Foundation Model" project hosted by the Ministry of Science and ICT on the 30th at COEX in Seoul. Photo by Park Yujin

LG AI Research has unveiled its artificial intelligence (AI) model "K-Exaone," which achieves performance surpassing global frontier models such as ChatGPT from the United States and Qwen from China, while reducing the computational workload for training to one-third of previous levels.


Choi Junggyu, Head of the AI Agent Group at LG AI Research, stated at the first announcement event of the "Independent AI Foundation Model" project hosted by the Ministry of Science and ICT on December 30 at COEX in Seoul, "We have reduced the computational workload required for training to about 30% of previous levels while maintaining performance," adding, "We focused on reducing cost burdens through an efficient architecture."


Choi identified computational cost as a key limitation of existing large language models (LLMs). He explained that while computing attention between every pair of tokens in a sequence delivers strong performance, it is expensive both to train and to serve. "It is possible to selectively use only the necessary information rather than computing everything," he said, adding, "To achieve this, we designed an efficient hybrid attention architecture."


This model can operate even in mid- to low-tier GPU environments. Choi explained, "The model can be run without expensive infrastructure, so the initial setup and operational costs are relatively low," and added, "It is an AI model that startups and small to medium-sized enterprises can immediately utilize."


The training method is structured in stages. It first learns general foundational knowledge, then undergoes training to enhance reasoning ability, and finally acquires specialized knowledge in specific fields. Choi explained, "The approach is to improve the next stage's performance based on the knowledge accumulated in the previous stage," and added, "Through this training strategy, we were able to utilize available infrastructure efficiently." In fact, the GPU infrastructure utilization efficiency averaged 89.4%.
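The staged approach described above can be sketched as a simple curriculum loop, where each stage starts from the checkpoint the previous stage produced. The stage names, corpora, and `train_stage` helper are illustrative assumptions, not LG AI Research's actual pipeline.

```python
def train_stage(weights: dict, stage: str, data: list[str]) -> dict:
    """Stand-in for one training stage: carries prior weights forward
    and records what this stage was trained on."""
    updated = dict(weights)  # continue from the previous checkpoint
    updated[stage] = f"trained on {len(data)} corpora"
    return updated

# Three stages, each building on the knowledge accumulated before it.
stages = [
    ("general_knowledge", ["web text", "books", "encyclopedias"]),
    ("reasoning", ["math problems", "code", "step-by-step solutions"]),
    ("domain_specialization", ["field-specific documents"]),
]

weights: dict = {}
for name, corpus in stages:
    weights = train_stage(weights, name, corpus)

print(list(weights))  # stages accumulate in order
```

The point of the structure is that no stage starts from scratch: reasoning training reuses the general-knowledge checkpoint, and specialization reuses both, which also keeps each stage's compute footprint bounded.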


Performance evaluation results were also released. According to LG AI Research's self-assessment of 13 key benchmark items, the average performance reached 104% of the target. For models with up to 300 billion parameters, the company explained that K-Exaone achieved top-tier performance compared to global models such as OpenAI's GPT-OSS (120 billion parameters) and Qwen 3 from China (235 billion parameters).


Emphasis was also placed on safety and reliability. LG AI Research conducts compliance reviews on all training data and operates procedures to replace any data that could pose issues with alternative data. The company also stated that it checks the model based on various criteria, including universal human values, social safety, Korean-specific characteristics, and responses to future risks.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
