
[News Terms] The AI Semiconductor 'PIM' Backed by the Government: What Kind of Technology Is It?

Convergence of Memory Semiconductors and System Semiconductors
Specialized for AI Computing to Prevent Data Delays

Samsung Electronics unveiled the world's first PIM (Processing-in-Memory) last year / Photo by Yonhap News

[Asia Economy Reporter Lim Juhyung] On the 12th, the Ministry of Science and ICT introduced 'PIM (Processing-in-Memory)', a completely new type of AI semiconductor chip, with the goal of making Korea's AI semiconductor technology the world's best by 2030. PIM creates chips specialized for AI program computation by fusing system semiconductors with memory semiconductors, the traditional strength of the Korean semiconductor industry.


PIM is a hybrid semiconductor: it merges a memory semiconductor, a device that stores data, with a 'processor' that performs program computations. Why has this type of semiconductor emerged as a 'game changer' in the AI era? To understand this, one must first know how computers process AI.


AI-Dedicated Semiconductor Combining Processor with Memory

Today, AI is processed through a 'computing system' that integrates various semiconductors. This mainly includes system semiconductors such as AI accelerators and central processing units (CPUs), and memory semiconductors such as DRAM and HBM. AI accelerators are further divided into graphics processing units (GPUs) and other AI-specialized semiconductors.


Systems that process AI are generally built as supercomputers integrating various semiconductors. Photo of the AI data center to be established in Gwangju / Photo by Yonhap News

System semiconductors like GPUs and CPUs process AI programs. Meanwhile, modern neural network AI improves efficiency by repeatedly self-learning from massive data, requiring space to store this data. This is similar to how the human brain has short- and long-term memory spaces. Therefore, AI computers need both processors and memory simultaneously.


Until now, AI accelerator developers have addressed data issues by connecting memory 'beside' system semiconductors. For example, the world's largest GPU designer, Nvidia, provides computing systems that combine GPUs with HBM. However, this approach increases 'latency', the time it takes for data stored in memory to be transferred to the processor for computation. The longer the latency, the lower the efficiency of AI training.
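The cost of shuttling data described above can be sketched with a back-of-the-envelope model: before any computation happens, every parameter must cross the memory bus. All numbers below (FP16 weights, HBM-class bandwidth, model size) are illustrative assumptions, not figures from the article.

```python
# Toy model of the memory-to-processor transfer cost ("latency" in the
# article's sense). Constants are illustrative assumptions only.

BYTES_PER_PARAM = 2          # assume FP16 (2-byte) weights
MEMORY_BANDWIDTH = 3.35e12   # bytes/sec, roughly HBM-class (assumed)

def transfer_time(num_params: float) -> float:
    """Seconds spent just moving parameters across the memory bus."""
    return num_params * BYTES_PER_PARAM / MEMORY_BANDWIDTH

# A hypothetical 70-billion-parameter model: this cost is paid on
# every full read of the weights, before a single multiply happens.
print(f"{transfer_time(70e9) * 1e3:.1f} ms per full weight read")
```

Even at HBM-class bandwidth, the bus crossing alone takes tens of milliseconds per pass, which is the bottleneck PIM targets.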


Internal view of the A100 DGX Pod from Nvidia, the world's largest GPU company. To store massive AI data, it is equipped with 1 terabyte (TB) of RAM separate from the GPU. / Photo by Nvidia Official Website Capture

PIM is gaining attention as a solution to the latency problem. By integrating small computational units within memory, the computer chip can perform data storage and program processing simultaneously. Although it has less storage capacity than pure memory devices and lower computational power than pure processors, it has the advantage that a single chip performs all tasks required for AI computation.
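The contrast the article draws can be made concrete with a toy sketch: in a conventional design, every operand crosses the memory bus before the processor sees it, while in a PIM-style design the multiply-accumulate happens next to the memory bank and only the small result leaves. The class names and byte counts below are invented for illustration; this is a conceptual model, not how any real chip is programmed.

```python
# Toy contrast: conventional (ship data to processor) vs PIM-style
# (compute inside the memory bank). Names and numbers are illustrative.

class ConventionalSystem:
    def __init__(self, data):
        self.memory = data
        self.bytes_moved = 0

    def dot(self, weights):
        # Every operand crosses the memory bus before the ALU sees it
        fetched = list(self.memory)
        self.bytes_moved += len(fetched) * 4   # assume 4-byte values
        return sum(a * w for a, w in zip(fetched, weights))

class PIMSystem:
    def __init__(self, data):
        self.memory = data
        self.bytes_moved = 0

    def dot(self, weights):
        # Multiply-accumulate happens beside the memory bank;
        # only the final scalar result leaves the chip
        result = sum(a * w for a, w in zip(self.memory, weights))
        self.bytes_moved += 4
        return result

data, weights = [1.0, 2.0, 3.0], [0.5, 0.5, 0.5]
conv, pim = ConventionalSystem(data), PIMSystem(data)
assert conv.dot(weights) == pim.dot(weights) == 3.0
print(conv.bytes_moved, pim.bytes_moved)  # conventional moves 3x more
```

Both designs compute the same answer; the difference, as the article notes, is how much data must travel, and that gap grows with the size of the AI workload.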


Korea, World Leader in Memory, Also a Front-runner in PIM

Domestic companies that have accumulated world-class memory semiconductor technology are also front-runners in the PIM development race. Samsung Electronics unveiled the world's first 'HBM-PIM' last year, integrating AI computational units into HBM. SK Hynix also attracted industry attention in June by releasing 'GDDR6-AiM', a new-concept semiconductor that builds computational units into GDDR6 graphics DRAM.


However, AI-specialized semiconductors including PIM are a field where various overseas startups are already active, so fierce competition is expected. Leading technology companies such as the UK's Graphcore and the US's Cerebras have been developing parallel computing processors that integrate small memory devices into AI computation engines since the mid-2010s, and some in the industry consider these semiconductors similar to PIM.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
