Naver Cloud Partners with Samsung to Start Customized Production
Easier to Mass-Produce than NVIDIA's Chips; Price Competitiveness Expected
Utilizing Naver LLM 'HyperCLOVA X'
Minister Lee Jong-ho of the Ministry of Science and ICT is listening to Kim Yoo-won, CEO of Naver Cloud, explain the AI semiconductor developed by Naver Cloud and Samsung Electronics at the "2023 AI Semiconductor Future Technology Conference" on the 19th.
Naver is expected to unveil its own artificial intelligence (AI) semiconductor by the first half of 2025 at the latest.
On the 19th, Kim Yoo-won, CEO of Naver Cloud, told an Asia Economy reporter at the '2023 AI Semiconductor Future Technology Conference,' hosted by the Ministry of Science and ICT, "We will complete a prototype of the AI semiconductor within a year and a half at the latest." Naver Cloud is developing the domestic AI semiconductor in collaboration with Samsung Electronics.
Kim added, "We will first apply it to Naver Cloud and then also enter the business-to-business (B2B) sales market," but noted that the specific timing of mass production and the sales approach still need to be discussed with partner Samsung Electronics.
The chip Naver Cloud is developing is an FPGA (field-programmable gate array), which can be programmed for its intended use, and is specialized for 'inference,' the process of drawing conclusions from data fed into an AI model. While NVIDIA's AI chips excel at both training and inference, Naver Cloud focused on inference from the start, with compatibility with its own large language model (LLM), HyperCLOVA X, in mind.
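For readers unfamiliar with the distinction, the sketch below shows what inference-only use of a language model looks like in practice: an already-trained model is loaded and asked to generate output, with no training step running on the device. This is a generic illustration using the open-source Hugging Face Transformers library and a small public model as a stand-in; it is not Naver's serving stack, and HyperCLOVA X itself is not publicly downloadable.

# Hypothetical sketch of inference-only serving (generic, not Naver's stack).
# A pre-trained model answers a prompt; no training happens here.
from transformers import pipeline

# "gpt2" is a small public stand-in model used purely for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "AI semiconductors are"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])

An inference-specialized chip only ever has to run the forward pass shown above, which is why it can trade away the training-oriented features of a general-purpose GPU.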
Minister Lee Jong-ho of the Ministry of Science and ICT showed particular interest in the demonstration unveiled that day of HyperCLOVA X running on a verification FPGA board. Even though prototype development is not yet complete, Naver Cloud was able to run the model on the board without difficulty.
The AI semiconductor is built around LPDDR (low-power DRAM). Lee Dongsu, a director at Naver Cloud, explained, "NVIDIA's HBM (high-bandwidth memory) can be likened to hand embroidery, done stitch by stitch," adding, "By comparison, LPDDR can be mass-produced, so it will be price-competitive." The expectation is that it can effectively replace HBM amid the global semiconductor supply shortage.
Naver Cloud has also made progress on lightweighting technology that shrinks AI models for deployment. It compressed its large language model while improving computational performance, and reports that, at the same performance level, power efficiency can be more than eight times better than other companies' solutions. While the broader semiconductor industry worries that compressing a language model degrades its quality, Naver Cloud was relatively free of that risk because it owns the language model itself, which allowed it to take on the challenge. The work is also expected to help lower data center operating costs; data centers are called 'electricity guzzlers' because electricity accounts for 70-80% of their operating costs.
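As an illustration of the kind of model compression being described, the sketch below applies 8-bit dynamic quantization, a common lightweighting technique, to a toy network in PyTorch, shrinking weights from 32-bit floats to 8-bit integers. This is a generic example chosen for clarity; Naver Cloud has not published the specific compression method used for HyperCLOVA X.

# Hypothetical illustration of lightweighting via 8-bit dynamic quantization.
# Generic PyTorch; NOT Naver Cloud's published method.
import torch
import torch.nn as nn

# A toy stand-in for one feed-forward block of a large language model.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# Replace float32 Linear weights with int8 weights; activations are
# quantized on the fly during inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 1024)
with torch.no_grad():
    y = quantized(x)  # forward pass using int8 weight kernels (CPU)
print(y.shape)  # torch.Size([1, 1024])

Smaller integer weights mean less memory traffic per token, which is the main lever behind the power-efficiency gains claimed for compressed models.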
The semiconductor industry expects the global market to surpass $1 trillion (about 1,308 trillion KRW) by 2030, a forecast made by global consulting firm McKinsey in its semiconductor industry outlook report. Kim Hyungjun, head of the Next-Generation Intelligent Semiconductor Business Group, expressed optimism at the event, saying, "Although it was originally forecast for 2030, the $1 trillion milestone is now expected to be reached earlier, by 2027," reflecting high expectations for semiconductor market growth driven by the adoption of ChatGPT and AI technology.
Meanwhile, 25 companies including AI semiconductor and cloud firms attended the event that day.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.


