HBM Status Still High Amid 'Unusual Trends'
AI Accelerators Without HBM Set for Consecutive Launches
Changes Begin in 'Inference'... Expansion to Training Draws Attention
HBM Faces Persistent Limits Like High Cost
SK Hynix and Others Adjust Direction
Experts Say "It Will Take Time, But Alternatives Will Emerge"
In the era of artificial intelligence (AI), an unusual trend is emerging in the growth of high-bandwidth memory (HBM). Demand for AI accelerators continues to rise and HBM's position remains strong, yet efforts to develop AI chips without HBM are accelerating, fueling speculation that a shift away from HBM may be gaining momentum. This wave of change could begin as early as the end of this year.
SK Group Chairman Chey Tae-won inspects an HBM production line at SK Hynix's Icheon Campus in Icheon, Gyeonggi Province, on the 5th. [Image source=Yonhap News]
According to semiconductor industry sources and foreign media on the 19th, new AI accelerators that do not use HBM are expected to be released domestically and internationally between the end of this year and the first half of next year. In South Korea, the AI accelerator ‘Maha-1,’ jointly developed by Samsung Electronics and Naver, is expected to undergo performance testing within this year. If development proceeds smoothly, the launch and full-scale commercialization are likely to take place early next year. Naver is designing the core software (SW) for Maha-1, while Samsung Electronics is responsible for the chip’s design and production. Maha-1 uses low-power DRAM (LPDDR) instead of HBM. LPDDR consumes less power and is cheaper than HBM.
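The tradeoff Maha-1 is betting on can be sketched with rough numbers. The sketch below is illustrative only: the per-package bandwidth figures are ballpark public values for HBM3 and LPDDR5X, not vendor specifications, and the 1 TB/s target is a hypothetical design point.

```python
import math

# Illustrative back-of-envelope comparison of HBM vs. LPDDR for an AI
# accelerator's memory subsystem. All figures are rough ballpark numbers
# chosen for illustration, not vendor specifications.
HBM3_STACK_BW_GBS = 819      # ~819 GB/s per HBM3 stack
LPDDR5X_PKG_BW_GBS = 68      # ~68 GB/s per x64 LPDDR5X package at 8533 MT/s

def packages_needed(target_bw_gbs: float, per_pkg_bw_gbs: float) -> int:
    """Smallest number of memory packages that meets a bandwidth target."""
    return math.ceil(target_bw_gbs / per_pkg_bw_gbs)

# Hypothetical design point: 1 TB/s of memory bandwidth.
target = 1000.0
hbm_stacks = packages_needed(target, HBM3_STACK_BW_GBS)    # 2 stacks
lpddr_pkgs = packages_needed(target, LPDDR5X_PKG_BW_GBS)   # 15 packages

print(f"HBM3 stacks needed: {hbm_stacks}")
print(f"LPDDR5X packages needed: {lpddr_pkgs}")
```

The point the arithmetic makes is that LPDDR needs many more packages to match HBM's bandwidth, but each package is cheaper and lower-power, which is why the tradeoff can favor LPDDR for inference chips that are less bandwidth-hungry than training chips.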
Overseas, Canadian semiconductor startup Tenstorrent plans to commercialize its AI chip ‘Blackhole’ by the end of this year. Tenstorrent recently completed development of Blackhole and is producing it through Taiwan’s TSMC. Blackhole is credited with overcoming the limitations of its predecessor, ‘Wormhole,’ which reached only about 30% of the performance of NVIDIA’s products, while also cutting power consumption, and it is drawing attention as a new product that could threaten NVIDIA. Building on Blackhole, Tenstorrent plans to release the AI chiplet ‘Quasar’ next year. Tenstorrent CEO Jim Keller is a prominent critic of HBM’s high cost, which he regards as inefficient; the company has instead used graphics DRAM (GDDR6) in its products. Other startups are also entering the fray: Korean fabless semiconductor company FuriosaAI plans to launch ‘Renegade S,’ an edge-device NPU equipped with LPDDR, in the fourth quarter.
Most of the AI accelerators now in development, Maha-1 included, are designed for ‘inference’; only Tenstorrent’s products are being built to handle both training and inference. AI accelerators divide into ‘training’ and ‘inference’ types, and NVIDIA’s HBM-equipped accelerators dominate roughly 90% of the training market.
The movement to reduce HBM usage in inference AI accelerators still has a small impact on the overall market. However, it is considered significant as it represents the first step in changing the market landscape. Professor Kim Jung-ho of KAIST, known as the ‘father of HBM,’ said, "I believe that systems led by HBM will not change for about the next 10 years in training," but added, "For inference, there are many attempts to make it lightweight and low-power without HBM. Some may be able to replace HBM."
"HBM alone is not enough" Growing voices
Nevertheless, the movement to replace HBM has recently become more prominent, and the biggest reason is cost. Per chip, cutting-edge fifth-generation HBM (HBM3E) costs 7 to 8 times as much as the DDR5 used with CPUs. Tenstorrent CEO Jim Keller said in a recent interview, "HBM is certainly excellent, but it falls short in terms of cost efficiency," adding, "The price is too high to continue advancing it." He continued, "Currently, there are efforts in the market to lower HBM prices by using alternatives or to adopt more cost-efficient technologies."
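The cost gap the article describes compounds quickly at the board level. The sketch below normalizes the DDR5 chip price to 1.0 and uses a hypothetical eight-package board; only the 7-8x multiplier comes from the article.

```python
# Sketch of the per-chip cost gap described in the article: fifth-
# generation HBM at 7-8x a DDR5 baseline. Prices are normalized
# (DDR5 chip = 1.0); the eight-package board is hypothetical.

def memory_cost(packages: int, price_per_package: float) -> float:
    """Total memory bill for a board with the given package count."""
    return packages * price_per_package

ddr5_chip = 1.0
hbm_chip_low, hbm_chip_high = 7 * ddr5_chip, 8 * ddr5_chip

print(memory_cost(8, ddr5_chip))       # 8.0  (DDR5-class baseline)
print(memory_cost(8, hbm_chip_low))    # 56.0 (HBM at 7x)
print(memory_cost(8, hbm_chip_high))   # 64.0 (HBM at 8x)
```

Even in normalized terms, the memory bill for an HBM-based part lands at 7-8x the commodity-DRAM equivalent, which is the margin chip designers are trying to claw back with LPDDR or GDDR6.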
This is tied to the complexity of the HBM manufacturing process. HBM is built by stacking DRAM dies vertically and flattening the wafer surface through a chemical mechanical planarization (CMP) step, which levels the DRAM surface and minimizes chip thickness. The process demands substantial manpower and equipment, which ultimately drives up HBM's unit cost.
It is also noteworthy that SK Hynix, the world's largest producer and seller of HBM, has hinted at a change in direction. Earlier this month, SK Hynix held a Future Forum at its headquarters in Icheon, Gyeonggi Province, with 'Post-HBM' as the agenda. The forum was convened to explore, together with internal and external experts, how the company can maintain its memory-market dominance, enhance product value, and lead the AI era after HBM.
SK Hynix President Kwak No-jung said, "As AI develops and accelerates in earnest, we thought the future would become clearer and more predictable, but it has become much more ambiguous and difficult to forecast," adding, "We are now in a situation where we need to broadly consider and discuss how to prepare for the future based on various scenarios." This implies that it is time to prepare for what comes after HBM.
The concept of ‘Post-HBM’ is also emerging in Silicon Valley. Chipmakers such as AMD and semiconductor design company ARM are reportedly taking on the challenge of creating AI chips that do not require HBM.
"HBM alternatives will gradually emerge"
Still, the outlook for HBM in the semiconductor market remains optimistic. In June, U.S. securities firm Bernstein forecast that the global HBM market will more than double next year to about $25 billion (approximately 33 trillion KRW) in revenue.
Despite its high cost, companies use HBM because there is currently no product that can replace it. If a product that is cheaper and offers performance exceeding HBM emerges, companies’ choices could change significantly. Bum Jin-wook, professor of electronic engineering at Sogang University, said, "As HBM matures, various solutions are expected to be developed, but there is also a possibility that HBM alternatives will gradually be created," adding, "In the camp competing with NVIDIA, there are also Compute Express Link (CXL) and Processing In Memory (PIM). Since these are not yet standardized, it will take quite some time before they can replace HBM."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![7~8 Times More Expensive High-End Memory... The 'De-HBM' Trend Blowing in Semiconductors [Post-HBM Launch]①](https://cphoto.asiae.co.kr/listimglink/1/2024091910290032055_1726709339.jpg)

