Naver AI Executives Head to the US en Masse... an Anti-Nvidia Alliance with Intel

Naver Collaborates with Samsung Electronics and Intel
Shaking Up the Nvidia-Dominated Market and Targeting the Inference Sector

Key executives in charge of artificial intelligence (AI) at Naver have departed for the United States to collaborate with Intel. This move is seen as a serious effort to shake up the semiconductor market dominated by Nvidia and shift the focus toward AI service-centered platforms.


According to the IT industry on the 10th, key AI personnel including Lee Dongsu, Director of Naver Cloud Hyperscale AI, Ha Jungwoo, Head of Naver Future AI Center, and Kwon Sejung, Leader of Naver Cloud, recently headed to Phoenix, Arizona, where ‘Intel Vision 2024’ was held. Intel hosted a two-day technology conference starting on the 8th (local time) under the theme ‘Bringing AI Everywhere.’ It is reported that representatives from both companies finalized their collaboration during the conference.

Naver headquarters in Seongnam, Gyeonggi Province. Photo by Jinhyung Kang aymsdream@

The two companies are expected to announce related details soon. A likely plan is to jointly develop software based on Intel’s AI-dedicated chip ‘Gaudi.’ This approach combines hardware and software technologies, similar to how Nvidia has dominated the market by integrating its proprietary AI chips with the AI development platform ‘CUDA.’


Naver’s partnerships are not limited to Intel. It is also developing the AI inference chip ‘Mach-1’ with Samsung Electronics. Samsung is responsible for chip design and production, while Naver handles the core software design. Earlier, Kyung Kyehyun, President of Samsung Electronics’ Device Solutions (DS) Division, introduced Mach-1 as “a product that reduces data bottlenecks (latency) to one-eighth and improves power efficiency by eight times.” Performance verification and stabilization tests are scheduled for this year.


Naver is gaining attention in the AI semiconductor market because of its deep understanding of AI. Naver developed the world’s third-largest large language model (LLM) and provides AI services based on it. It has a large user base not only in Korea but also in Japan and Southeast Asia. Such experience is essential for Samsung Electronics and Intel to develop chips optimized for AI. Kim Yangpaeng, a senior researcher at the Korea Institute for Industrial Economics and Trade, said, “Nvidia’s chips are essentially general-purpose products, and in the future, chips with specific functions for specific fields will be needed. The specifications for these are best known by the companies using the chips.”

[Image source: Yonhap News]

Naver also needs customized chips to expand its AI services. It has determined that inference semiconductors are suitable for providing services to many users without delay. Unlike training semiconductors, which handle large volumes of data and require high-performance computation, inference semiconductors prioritize speed and cost. However, most IT companies, including Naver, currently use Nvidia’s chips without distinguishing between training and inference due to a lack of viable alternatives.


The core axis of the AI semiconductor market is shifting from training to inference. The inference semiconductor market is growing as demand for AI services increases. According to market research firm Omdia, the inference semiconductor market is expected to grow from $6 billion (approximately 8 trillion KRW) last year to $143 billion (approximately 194 trillion KRW) by 2030. Some analyses predict that by 2025, inference semiconductors will account for 78% of AI semiconductors, roughly three and a half times the 22% share of training semiconductors. Kim Hyungjun, Head of the Next-Generation Intelligent Semiconductor Project Group, said, “For AI to become widespread, inference semiconductors that cut power consumption and cost must proliferate. Both on-device and server applications will move toward inference.”


There is also an intention to build an anti-Nvidia front. The more AI service providers depend on Nvidia, the weaker their bargaining power in chip purchases becomes, and their AI development itself grows dependent on Nvidia, because they must build technology around the chip rather than software optimized for their own models. An industry insider explained, “Even if AI models are made lightweight through software techniques, chips that do not account for this can actually slow performance. Nvidia claims its chip performance has improved by its own standards, but actual results in the field vary widely.”


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
