
LG Also Spent Only 7 Billion KRW on AI Development; AI Industry Closing In on DeepSeek

AI Industry Competitiveness Review Meeting
LG ExaOne Development Cost 1 Billion KRW Less Than DeepSeek
Consensus Emerges: "Low-Cost Models Also Show Potential"

Since the launch of China's DeepSeek, competition among 'low-cost, high-performance' artificial intelligence (AI) models is expected to intensify. In South Korea, LG drew attention by announcing that it developed an AI model for less money than DeepSeek spent.


On the morning of the 6th, Bae Kyung-hoon, head of LG AI Research, attended the 'Domestic AI Industry Competitiveness Diagnosis and Review Meeting' held at the National AI Committee office in Jung-gu, Seoul, and said, "This is being disclosed here for the first time: the 32B version of 'ExaOne 3.5,' unveiled last December, cost 7 billion KRW to develop." That is less than the $5.57 million (about 8.07 billion KRW) reportedly spent training DeepSeek's 'V3' model.


ExaOne 3.5 is the AI model used by employees across LG affiliates. It can process a long text equivalent to 100 pages of A4 paper at once and offers features such as job-specific prompt recommendations and complex data analysis, making it well suited to a range of tasks. Its performance was also recognized last December, when it ranked first on the 'Open LLM Leaderboard,' the large language model (LLM) evaluation run by Hugging Face, the world's largest AI platform.


Photo: Yonhap News

Bae described the development process: "We trained for four months on NVIDIA 'H100' graphics processing units (GPUs)," adding, "We also used the mixture-of-experts (MoE) technique, which is considered a key factor in DeepSeek's low-cost development." MoE selectively activates only the expert sub-networks suited to a given input, reducing the computation needed for training and inference.
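The compute saving behind MoE can be illustrated with a toy top-k router. The sketch below is purely illustrative (a minimal NumPy version with made-up dimensions and a simple softmax gate); it is not LG's or DeepSeek's actual architecture, only the general idea of running a few experts per input instead of all of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_moe(x, experts, gate_w, k=2):
    """Minimal mixture-of-experts forward pass: score every expert with a
    linear gate, run only the top-k experts on x, and combine their outputs.
    The experts that are not selected are skipped entirely, which is where
    the training/inference compute savings come from."""
    scores = x @ gate_w                      # one gate score per expert
    top = np.argsort(scores)[-k:]            # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# 8 tiny "experts" (each just a linear layer); only 2 run per input.
d = 4
expert_mats = [rng.normal(size=(d, d)) for _ in range(8)]
experts = [lambda x, M=M: x @ M for M in expert_mats]
gate_w = rng.normal(size=(d, 8))

x = rng.normal(size=d)
y = top_k_moe(x, experts, gate_w, k=2)
print(y.shape)  # the mixture output has the same shape as a single expert's
```

With 8 experts and k=2, each input pays for 2 expert forward passes rather than 8, while total parameter count (and thus model capacity) stays at the full 8-expert level.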


He also expressed regret: "Had it been released globally rather than only within the (LG) group, we could have promoted it better," and said, "We will soon introduce an AI model on the level of DeepSeek's 'R1' and release it as open source."


Actual use of 'ChatExaOne,' built on LG's hyperscale AI 'ExaOne.' Photo by LG

At the meeting, there was consensus that the launch of DeepSeek has demonstrated the potential of 'low-cost, high-performance' AI models. Kim Doo-hyun, a professor of computer engineering at Konkuk University, said, "OpenAI turned out not to be an insurmountable wall. It made me think OpenAI is not the only answer, and that models around the R1 level can also compete." Kim Seong-hoon, CEO of Upstage, which developed the open-source model 'Solar,' said, "When OpenAI's reasoning model 'o1' came out, it was clearly a path toward artificial general intelligence (AGI), but it was so expensive that it was hard to keep up. DeepSeek shows that development can be done in various ways, which I find very hopeful."


Since DeepSeek is known to have developed R1 on NVIDIA's lower-spec 'H800' GPUs, some argued that reasoning models can be built on modest computing infrastructure. Jo Gang-won, CEO of AI infrastructure company Moreh, explained, "We have only one or two NVIDIA GPUs. Even without NVIDIA's high-end GPUs, it is possible with alternative semiconductors if the technology is solid."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
