[Peace&Chips] '300 Million GB Demand This Year'... Eyes on Core Chips for AI Servers

Despite Semiconductor Market Slump, Positive Outlook for HBM Continues
HBM Prices 3-5 Times Higher Than DDR5 DRAM
HBM Capacity Used in GPUs to Increase by 60% This Year

Editor's Note: Semiconductors are known as the rice of modern industry. Although we hear the term every day, it is often hard to explain. Peace & Chips makes the complex concepts and overall trends of the semiconductor industry easy to digest. Just bring your spoon.

"The more we rely on high-performance artificial intelligence (AI) servers, the greater the need for high-performance AI semiconductors. Between 2023 and 2024, not only will demand for high-bandwidth memory (HBM) increase, but the capacity for advanced packaging (which combines HBM memory with central processing units (CPU), graphics processing units (GPU), etc.) could grow by 30-40% in 2024."


Market research firm TrendForce revealed this in a report on the 21st. The forecast was made as demand for generative AI surges, accelerating AI server development not only among major cloud service providers (CSPs) such as Microsoft (MS), Google, and Amazon Web Services (AWS), but also Chinese companies like Baidu and ByteDance.


If you have an interest in the semiconductor market, you have likely heard of HBM several times recently. Semiconductors are divided into memory semiconductors (storage) and system semiconductors (processing). HBM is a type of DRAM, a volatile memory that loses stored data when power is cut. It stacks multiple DRAM chips vertically, widening the memory interface and sharply increasing data transfer speeds.
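Why does stacking help? HBM's advantage comes from an extremely wide interface rather than a faster single channel: peak bandwidth is roughly interface width times per-pin transfer rate. The sketch below illustrates this with representative public spec figures (DDR5-4800, HBM3 at 6.4 Gb/s per pin); these numbers are an assumption for illustration and do not come from this article.

```python
# Peak bandwidth ~= interface width (bits) x per-pin transfer rate (GT/s) / 8 bits per byte.
# Spec figures are representative public numbers, used only for illustration.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits * transfer_rate_gt_s / 8

ddr5_channel = peak_bandwidth_gb_s(64, 4.8)    # one 64-bit DDR5-4800 channel
hbm3_stack = peak_bandwidth_gb_s(1024, 6.4)    # one 1024-bit HBM3 stack

print(f"DDR5 channel: {ddr5_channel:.1f} GB/s")  # ~38.4 GB/s
print(f"HBM3 stack:   {hbm3_stack:.1f} GB/s")    # ~819.2 GB/s
```

With these illustrative figures, a single HBM stack delivers on the order of twenty times the bandwidth of a DDR5 channel, which is why it pairs naturally with GPUs.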



HBM’s faster data transfer speed compared to regular DRAM makes it well-suited for AI servers. This is why it is essential alongside GPUs in AI servers, where large-scale data processing is critical. This is also why Samsung Electronics and SK Hynix, leaders in the global memory market, are heavily investing in HBM product development.


TrendForce expects that the amount of HBM included in AI servers equipped with NVIDIA, AMD GPUs, and Google Tensor Processing Units (TPUs) will reach 290 million gigabytes (GB) this year alone, a 60% increase compared to last year. They also predict an additional 30% growth next year. Truly a rosy outlook.
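Those percentages imply a quick sanity check. Assuming the 60% and 30% figures compound on the 290 million GB estimate as stated (a back-of-the-envelope sketch, not TrendForce's own arithmetic), the implied volumes are roughly:

```python
this_year = 290e6               # TrendForce estimate: 290 million GB of HBM in AI servers
last_year = this_year / 1.6     # implied by "a 60% increase": ~181 million GB
next_year = this_year * 1.3     # "an additional 30% growth": ~377 million GB

print(f"last year: ~{last_year / 1e6:.0f} million GB")
print(f"this year:  {this_year / 1e6:.0f} million GB")
print(f"next year: ~{next_year / 1e6:.0f} million GB")
```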


However, there are cautious views regarding the HBM market. Despite the hype, the market has not fully formed. The semiconductor industry estimates that the HBM market size is around 1% of the total DRAM market. Considering last year’s DRAM market size was about $80 billion, the HBM market is estimated to be around $800 million.



Nonetheless, memory companies present HBM as a key keyword for next-generation memory. Kyung Kye-hyun, President of Samsung Electronics' Device Solutions (DS) Division, wrote this month on the company's internal communication channel WeTalk, addressing the AI era: "As the AI era arrives, the importance of semiconductors that enhance AI performance and efficiency will increase, making HBM crucial."


Industry insiders predict that major players like Samsung Electronics and SK Hynix will focus on expanding their HBM business. Samsung Electronics has recently filed trademarks at the Korean Intellectual Property Office for several HBM brand names ending in 'bolt,' and has announced plans to launch its fifth-generation HBM product, HBM3P, in the second half of the year.


Dongwon Kim, a researcher at KB Securities, forecast, "Samsung Electronics will begin full-scale supply of HBM3, the fourth-generation HBM product, to North American GPU companies in the fourth quarter, and HBM3's share of Samsung's DRAM sales is expected to grow from 6% this year to 18% next year."


Infographic: the roles of the GPU and HBM (memory). The GPU continuously fetches parts of the artificial neural network and data stored in memory to perform computations (training and inference), repeatedly storing intermediate outputs and final results back into memory. [Image and description source=SK Hynix Newsroom]
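The loop the infographic describes can be summarized in a few lines. The sketch below is only a schematic of that fetch-compute-store cycle; `memory`, `gpu`, and all method names are hypothetical stand-ins, not a real API.

```python
def training_step(memory, gpu, batch_id):
    """One pass of the GPU-HBM loop: fetch, compute, store, repeat."""
    weights = memory.read("model_weights")  # fetch part of the neural network from memory
    batch = memory.read(batch_id)           # fetch input data from memory
    activations = gpu.forward(weights, batch)     # computation: inference/forward pass
    memory.write("activations", activations)      # store intermediate outputs back
    grads = gpu.backward(weights, activations)    # computation: training/backward pass
    memory.write("model_weights", gpu.update(weights, grads))  # store results back
```

Because every step reads from and writes to memory, memory bandwidth, not raw GPU compute, often sets the pace; this is the bottleneck HBM is designed to relieve.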

SK Hynix, which developed the world's first HBM in 2013 and established an early presence in the market, is also expanding its business, releasing samples of HBM3E, the fifth-generation HBM product, in the second half of the year. At every opportunity to introduce next-generation memory, including the recent U.S. IT exhibition 'HPE Discover 2023,' SK Hynix has showcased its HBM products to demonstrate its technological prowess.


Last month, SK Hynix announced that it had become the first in the world to develop a 24GB HBM3 product, the industry's highest capacity to date, and had provided samples to customers including AMD. Roman Kirichinsky, Vice President and General Manager of AMD's Memory Products Division, expressed appreciation for SK Hynix's continuous efforts in developing new HBM memory.


Currently, HBM is priced 3 to 5 times higher than server-grade Double Data Rate (DDR) 5 DRAM, which is already a high-performance product. The industry expects that as HBM demand and supply increase, prices will drop, enabling the market to open up in earnest. We should look forward to the HBM market that domestic companies will lead.



This article is from [Peace & Chips], published weekly by Asia Economy. Subscribe to receive it for free.




© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
