"Next-Generation HBM4, a New Leap Forward for Korea's Semiconductor Industry"
"HBM Could Reach 1,000 Layers"
"The Boundary with System Semiconductors Will Disappear"
"Samsung Must Expand Investment to Prepare for the Post-HBM4 Era"
"China's De
Professor Kim Jeongho of KAIST is interviewed by Asia Economy on the 25th of last month. Photo by Baek Jongmin, Tech Specialist
Professor Kim Jeongho predicts that the future of HBM will go beyond a simple competition of memory capacity and speed, evolving into a system semiconductor that integrates computing functions. He argues that this is a golden opportunity for Korea to expand its overwhelming strengths in the memory sector into system semiconductor competitiveness, thereby widening the gap with latecomers such as China.
Professor Kim emphasized, "The integration of memory and computing functions, starting with HBM4, will become a new inflection point for Korea's semiconductor industry," and added, "We must concentrate national capabilities on developing a new memory-centric computing architecture." He also projected that building a global talent network and establishing an education system specialized in AI will be the ultimate source of competitiveness. Based on Korea's technological superiority in memory, he explained that it is necessary to strengthen software and system integration capabilities to secure a technological edge in the race to dominate HBM, which is the 'uranium' of the AI era.
Professor Kim has led research on HBM since its early development and is recognized as a world authority in the field of memory semiconductors. With a Ph.D. in physics, he has gained attention for his original research on computer architecture and memory systems and has maintained close industry-academia cooperation with major Korean semiconductor companies such as Samsung Electronics and SK hynix.
The following is a Q&A with Professor Kim.
Q. You have emphasized the importance of HBM for a long time. Was there a particular reason?
A. I tend to prefer exploring non-mainstream, fundamental limitations rather than mainstream research that everyone else is doing. Over 20 years ago, I saw that Moore's Law was reaching its limit and that the bottleneck in computer computation would not be the CPU or logic, but the memory bandwidth, the width of the data channel. That's why I began foundational research on HBM. Since 2011, I have participated in HBM product development. Since then, most of my master's and doctoral research has been about HBM design. Even now, ours is the only university lab in the world dedicated to HBM research.
When SK hynix began developing HBM with Nvidia, there was a shortage of research personnel, so our lab joined the project. I was convinced that this would be the technology to define the next 20 years, so I devoted myself to the research. As the AI era arrived, HBM came into the spotlight. Whether for AI or high-performance computing, I have always believed that the ability to process data quickly, that is, memory performance, is key. These days, we are conducting research on HBM4, 5, 6, and 7 architectures.
Professor Kim Jeongho of KAIST is taking a commemorative photo with his students after receiving the Kang Daewon Award at the Korea Semiconductor Academic Conference held in Jeongseon, Gangwon Province on February 13. The Kang Daewon Award was established to highlight the achievements of the late Dr. Kang Daewon, a world-renowned semiconductor researcher who developed the MOSFET. Photo by Baek Jongmin, Tech Specialist
Q. The Korean government has encouraged companies to focus on system semiconductors and foundries. Yet, memory semiconductors have emerged as the key chips supporting GPUs, and we are even struggling to secure GPUs that use the HBM we developed. Why are memory semiconductors so important in the AI era, and what opportunities does this present for Korea?
A. For AI computation, especially in training and inference of large language models (LLMs), it is essential to supply massive amounts of data rapidly to GPU cores and receive the results. Here, the memory's bandwidth and capacity, which serve as the data channel, become the bottleneck that determines the overall system performance. No matter how good the GPU is, it cannot perform at its best without sufficient memory support.
HBM is a technology developed precisely to solve this bottleneck. By stacking DRAM vertically and placing it very close to the GPU, it dramatically reduces the distance and time for data movement. I describe HBM as the 'uranium' of the AI era, meaning it is the core fuel that powers the nuclear power plant called AI.
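The bottleneck Professor Kim describes can be illustrated with a back-of-the-envelope roofline calculation. The sketch below is a simplification with made-up figures, not specs of any real GPU or HBM product:

```python
# Roofline-style check: is a workload compute-bound or memory-bound?
# All figures are illustrative assumptions, not product specifications.

def attainable_tflops(peak_tflops, bandwidth_tb_s, flops_per_byte):
    """Attainable throughput is capped by either peak compute or by
    memory bandwidth times arithmetic intensity (FLOPs per byte moved)."""
    return min(peak_tflops, bandwidth_tb_s * flops_per_byte)

peak = 1000.0   # hypothetical peak compute, in TFLOPS
hbm_bw = 3.35   # hypothetical HBM bandwidth, in TB/s

# LLM inference tends to have low arithmetic intensity, so memory
# bandwidth, not compute, sets the ceiling on performance.
low = attainable_tflops(peak, hbm_bw, flops_per_byte=10)    # memory-bound
high = attainable_tflops(peak, hbm_bw, flops_per_byte=500)  # compute-bound

print(low, high)
```

With these assumed numbers, the low-intensity workload reaches only a few percent of peak compute: no matter how fast the GPU, the memory channel decides the result, which is exactly why widening that channel with HBM matters.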
Recently, it has become popular to create Studio Ghibli-style profile pictures using ChatGPT. This requires not only powerful GPUs but also enormous amounts of memory. As we move into an era where AI produces videos, I believe memory usage will increase even further.
Korea possesses world-class technology and production capacity in both DRAM and NAND flash. This overwhelming strength in the memory sector is Korea's most powerful weapon in the AI era. How we leverage this advantage will determine the future of Korea's semiconductor industry.
HBM also plays a role in creating Studio Ghibli-style illustrations using ChatGPT. ChatGPT drew Professor Kim and his students in the Studio Ghibli style. Professor Kim does not focus solely on semiconductors. The student researchers at his lab, Teralab, have also conducted analysis work on DeepSeek. Professor Kim's research team is working to enhance HBM design using AI technology. Photo by Teralab
The Future of HBM: Evolution into System Semiconductors
Q. How do you see HBM technology evolving in the future? Will there be changes beyond simply stacking DRAM higher?
A. HBM will continue to evolve. Currently, 12 layers are being discussed, and soon it will be 16 layers. Ultimately, I believe it will be possible to stack up to 1,000 layers. Research is also underway on ways to maximize capacity by stacking NAND flash right next to DRAM, much like a warehouse building next to an officetel.
The more significant change is that HBM itself is becoming a system semiconductor. Starting with HBM4, logic dies, that is, computing functions, are being added to the bottom layer (the base die). This allows some data processing (Processing-in-Memory, PIM) or data management to be performed inside the HBM before sending data to the GPU.
By the time we reach HBM 7, I am proposing a structure in which CPU and GPU cores are directly integrated into the bottom one or two layers of HBM. In this way, HBM will no longer be just memory; it will become a computing system in itself. The boundary between memory and system semiconductors will disappear. This is a golden opportunity to expand Korea's strength in memory into system-level competitiveness.
Q. Samsung Electronics appears to be chasing SK hynix in the HBM market. What efforts are needed for Samsung to regain leadership, especially given China's aggressive pursuit?
A. HBM4 is extremely important for Samsung. They must deliver a successful product here. If they fall behind, the gap could widen for several years. Chinese companies are rapidly increasing their market share in the general-purpose DRAM market, such as DDR4, and are quickly catching up in NAND flash stacking technology as well.
If Korea fails to secure overwhelming leadership in high-value-added, cutting-edge technologies like HBM, the entire Korean memory industry could be threatened. If Samsung has so far approached HBM as an extension of traditional DRAM, focusing on low cost and low power, it is time to change. HBM is a core component of AI systems, so even if it costs more, the focus must be on maximizing performance and reliability.
Fortunately, Samsung has decisive strengths. It possesses not only world-class HBM design and manufacturing technology, but also state-of-the-art foundry capabilities. As HBM evolves into a system semiconductor that integrates computing functions, Samsung is the only company that can independently design the necessary logic dies and compute cores, integrate them with its own memory technology, and produce everything in-house at its own foundry.
To realize this potential, Samsung must break away from its past success formula and invest boldly in HBM-centric technology and transform its organizational culture. It needs a clear vision to move from memory leadership to system-integrated solutions and prepare for the post-HBM4 market.
It is time to move away from the mindset of manufacturing DRAM to support Intel CPUs in PCs and servers, and to shift focus to HBM for GPUs.
China's Pursuit: Overcoming Hardware Limitations with Software
Q. Chinese semiconductor companies are rapidly catching up. Where do you see their competitiveness and threat coming from?
A. About five years ago, I went to give a lecture at Huawei in Shenzhen. The entire city felt like Huawei Town, and the energy and passion of the young employees were remarkable. It reminded me of Samsung Electronics in Suwon in the 1980s. Their questions were high-level, and their eyes were full of life. The culture of working overnight on field beds was still there. The efforts of such talent, combined with full government support, are the source of Chinese companies' strength. The effect of recruiting talent from Korea and elsewhere is also a factor.
The most impressive aspect of China's recent AI technology is its ability to overcome the hardware limitations imposed by US sanctions through software and creative approaches. Models like DeepSeek, for example, optimize software so that the GPU performs other computations while waiting for data, ensuring it is never idle; if recalculating a value is faster than storing it in memory and reading it back, they recalculate. They compensate for hardware weaknesses with software, as the saying goes, "If you don't have teeth, use your gums."
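The recompute-versus-store tradeoff mentioned here (the idea behind techniques such as activation checkpointing) comes down to comparing two times. A minimal sketch, with purely illustrative hardware figures:

```python
# Sketch of the recompute-vs-store decision. The accelerator numbers
# below are illustrative assumptions, not real hardware specs.

def cheaper_to_recompute(recompute_flops, peak_flops_per_s,
                         tensor_bytes, bandwidth_bytes_per_s):
    """Recompute a tensor if redoing the math is faster than
    reading the saved copy back across the memory channel."""
    t_recompute = recompute_flops / peak_flops_per_s
    t_reload = tensor_bytes / bandwidth_bytes_per_s
    return t_recompute < t_reload

# Hypothetical chip: 100 TFLOP/s of compute, 2 TB/s of bandwidth.
# Rebuilding a 100 MB tensor with 1 GFLOP of math beats reloading it,
# because compute is abundant relative to memory bandwidth.
print(cheaper_to_recompute(
    recompute_flops=1e9,
    peak_flops_per_s=100e12,
    tensor_bytes=100e6,
    bandwidth_bytes_per_s=2e12))
```

The asymmetry between fast compute and scarce bandwidth is precisely what makes such software tricks pay off on sanctioned hardware.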
As seen in cases where personal Nvidia GPU memory is modified, their understanding and application of hardware have also improved significantly. I believe the combination of software capability and hardware application is China's greatest threat.
The Limits of GPUs and Memory-Centric Computing
Q. The importance of GPUs has grown in the AI era, but we have yet to hear of any successful GPU development in Korea. How should this issue be addressed?
A. It is not that Samsung lacks the capability to design and manufacture GPU hardware. I believe it is entirely possible. The real issue is not the hardware itself, but the software ecosystem. Nvidia's dominance in the GPU market is due to its powerful software development platform called CUDA, numerous libraries, high-speed networks supporting GPUs, and an optimized data center operating software stack.
Simply manufacturing GPU chips does not enable AI model training and service. You need an entire software environment that can efficiently connect and manage thousands or tens of thousands of GPUs and allow developers to use them easily. Building this is much more difficult.
Therefore, rather than simply trying to follow Nvidia and make GPUs, it may be more realistic and effective for Korea to pursue a new memory-centric computing architecture, where it already has strengths. This means developing new system structures that maximize memory bandwidth.
In other words, some of the GPU's computational workload can be handled at the memory level, or data movement can be minimized. This approach reduces dependence on the software ecosystem and leverages Korea's technological edge in memory.
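The memory-centric idea above can be sketched in a toy model: instead of shipping an entire array to the processor, near-memory logic computes the result locally and ships back only the answer. The class and method names below are illustrative, not any real PIM API:

```python
# Toy model of processing-in-memory (PIM): compare bus traffic when
# the processor reads everything vs. when logic near the memory
# performs the reduction locally. Names are hypothetical.

class MemoryStack:
    def __init__(self, data):
        self.data = data      # values resident in the memory stack
        self.bytes_moved = 0  # traffic across the memory channel

    def read_all(self):
        # Conventional path: every 8-byte element crosses the bus
        # so the processor can do the arithmetic.
        self.bytes_moved += 8 * len(self.data)
        return list(self.data)

    def reduce_in_place(self):
        # PIM path: logic in the base die sums locally, and only
        # one 8-byte result crosses the bus.
        self.bytes_moved += 8
        return sum(self.data)

stack = MemoryStack(list(range(1_000_000)))
total = stack.reduce_in_place()
print(stack.bytes_moved)  # 8 bytes moved, vs. 8,000,000 for read_all
```

Offloading even simple reductions this way cuts data movement by orders of magnitude, which is the system-level advantage the professor argues Korea's memory makers are positioned to capture.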
▲HBM (High Bandwidth Memory): A 3D packaging technology in which memory chips are stacked vertically to supply data at high speed to processors such as GPUs
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

