
[Chip Talk] Next-Generation Memory 'SOCAMM' Rises... Fierce Three-Way Race Among Samsung, SK, and Micron

Nvidia Rubin to Fully Adopt SOCAMM2
From Micron's Monopoly to a Three-Way Race
Korean Companies Emphasize Supply Stability
A New Memory Pillar Emerges Amid AI Boom

As competition in artificial intelligence (AI) development intensifies, a new market called 'SOCAMM' is emerging. SOCAMM is a next-generation memory module developed as a proprietary standard by Nvidia. Competition among the three major memory makers is heating up over the low-power DRAM (LPDDR)-based SOCAMM module, which will be installed in Nvidia's next-generation AI accelerator 'Rubin,' set to launch in the first half of next year. In particular, as Micron's exclusive position is being challenged, opportunities have opened up for Samsung Electronics and SK hynix as well.


According to industry sources on November 17, Micron announced that it shipped samples of SOCAMM2 to its customers on October 22 (local time). Among the three major memory companies, Micron was the first to disclose the shipment of SOCAMM2. On the same day, Samsung Electronics and SK hynix unveiled the actual SOCAMM2 modules and their specifications at the 27th Semiconductor Exhibition (SEDEX 2025), signaling their intent to compete.


'High Performance at a Reasonable Price': SOCAMM Delivers Both Efficiency and Cost-Effectiveness
SOCAMM2 unveiled by Micron. (Photo: Micron)

Unlike high bandwidth memory (HBM), which handles high-performance computing for graphics processing units (GPUs), SOCAMM serves as a 'sub-memory' that expands the memory pool available to an AI GPU. It is considered a rational alternative on cost-effectiveness grounds: it can deliver high performance at just 25-33% of the price of HBM. Although it does not offer HBM-class bandwidth, its large capacity makes it highly useful for mid-range workloads such as AI inference, personal AI PCs, and enterprise AI servers. HBM and SOCAMM can also be installed together in a single AI accelerator.


SOCAMM also has strengths in power efficiency. Micron stated that its SOCAMM module cuts power consumption to about one-third of that of conventional server memory modules (RDIMMs). Samsung Electronics introduced a SOCAMM module that improves power efficiency by more than 45% compared with DDR5-based DIMMs. Combined with its lower price relative to HBM, this underpins the product line's value proposition of 'high performance at a reasonable cost.'
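The two vendors state their power claims against different baselines and in different terms, so they are not directly comparable. A minimal sketch, converting each claim into "power drawn relative to its own baseline" (assuming efficiency means work per watt), makes that explicit:

```python
# Convert the two vendors' power claims into a common "relative power" figure.
# Note the baselines differ (Micron vs. RDIMM; Samsung vs. DDR5 DIMM), so this
# is an illustration of the claims, not a head-to-head vendor comparison.

def relative_power_from_reduction(fraction_of_baseline: float) -> float:
    """Micron-style claim: the module draws this fraction of baseline power."""
    return fraction_of_baseline

def relative_power_from_efficiency_gain(gain: float) -> float:
    """Samsung-style claim: +gain efficiency implies 1/(1+gain) of baseline
    power for the same work, assuming efficiency = work per watt."""
    return 1.0 / (1.0 + gain)

micron = relative_power_from_reduction(1 / 3)        # "about one-third"
samsung = relative_power_from_efficiency_gain(0.45)  # "more than 45% better"

print(f"Micron claim:  ~{micron:.0%} of RDIMM power")
print(f"Samsung claim: ~{samsung:.0%} of DDR5 DIMM power")
```

A 45% efficiency gain works out to roughly 69% of baseline power, while "one-third" is about 33%; the gap largely reflects the different baselines rather than a three-fold product difference.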


Standardization of SOCAMM2 is currently underway. JEDEC (the Joint Electron Device Engineering Council, the US-based standards body that originated within the Electronic Industries Alliance) is expected to officially announce the international standard for SOCAMM2 soon, following board approval procedures.


Nvidia Shifts to SOCAMM2... Samsung and SK Join the Race

Nvidia plans to equip its next-generation central processing unit (CPU) 'Vera,' scheduled for release next year, with SOCAMM. Vera will serve as the brain in Nvidia's next-generation AI accelerator Rubin, which will also launch next year. Nvidia initially planned to use SOCAMM in its 'GB300' series of AI accelerators slated for release this year, but postponed mass orders to next year due to technical issues. At the same time, the company decided to adopt SOCAMM2 (second-generation SOCAMM) instead of the original SOCAMM1 (first-generation SOCAMM).


Previously, Nvidia had planned to source SOCAMM1 almost exclusively from Micron. With the switch to SOCAMM2, however, the company allocated orders to Samsung Electronics and SK hynix as well. This shifted the landscape from Micron's dominant position to a 'three-way competition' in which Samsung Electronics, SK hynix, and Micron start on an equal footing, meaning Samsung Electronics and SK hynix now have an opportunity to win large-volume supply deals as well.


According to the specifications released by each company, Micron's SOCAMM2 offers a capacity of 192GB and supports speeds of up to 9.6Gbps. Samsung Electronics showcased modules supporting 8.5Gbps, while SK hynix exhibited modules ranging from 7.5Gbps up to 9.6Gbps. Operating speed is a crucial factor in reducing bottlenecks in AI servers, but as speed increases, manufacturing complexity and cost burdens also rise, making 'yield stabilization' a key challenge for each company.


Industry observers believe that all three companies have completed preparations for mass production of SOCAMM2 at a similar level. While Micron led the way by developing a module based on LPDDR5X, Samsung and SK hynix have also responded quickly, leveraging their experience with LPDDR processes and production capabilities. In particular, Samsung Electronics is emphasizing its industry-leading production capacity and supply stability. SK hynix has also announced plans to supply customers within the year.


Will a Second 'HBM' Market Emerge?

There are expectations that once SOCAMM2 enters full-scale mass production, it will emerge as a new pillar of memory for AI and data centers. According to market research firm Market Research Intelligence, the global low-power DRAM market, including SOCAMM, is projected to grow at an average annual rate of 8.1% from 2026 to 2033, reaching $25.8 billion (about 35.5 trillion won).
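The projection quotes only the 2033 endpoint and the growth rate; discounting the endpoint back by the stated CAGR gives the implied 2026 starting size, as a quick sketch:

```python
# Implied 2026 market size from the quoted 2033 endpoint and CAGR.
# Figures are the ones cited in the article (Market Research Intelligence).

END_VALUE_B = 25.8   # USD billions, projected for 2033
CAGR = 0.081         # 8.1% average annual growth
YEARS = 2033 - 2026  # 7 compounding periods

implied_2026 = END_VALUE_B / (1 + CAGR) ** YEARS
print(f"Implied 2026 market size: ~${implied_2026:.1f}B")
```

This works out to roughly $15 billion in 2026, i.e. the projection implies the market adds about $11 billion over the seven-year span.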


Park Kangho, a researcher at Daishin Securities, stated in a report, "Next year, Nvidia will drive demand for next-generation semiconductors," adding, "Next-generation modules like SOCAMM will also have a technological impact on the market."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

