'World's First LPU AI Chip' Achieved by Korean Startup: "2.4 Times Better Performance Than Existing GPUs" [Future of K-Semiconductors ①]

Kim Jooyoung, CEO of HyperAccel
Specialized Chip for Advanced LLM Operations
2.4 Times the Performance of AI GPUs
50% Faster Processing Speed
Primarily Used by Naver Cloud
On-Device Chip Co-Developed with LG Electronics to Launch by Year-End
Aiming for IPO in 2028

The headquarters of HyperAccel is located in Seocho-dong, Seocho-gu, Seoul. Upon entering the office of CEO Kim Jooyoung, a massive whiteboard occupying the entire right wall immediately catches the eye. The whiteboard is filled with all kinds of formulas, shapes, and arithmetic symbols arranged in irregular patterns. HyperAccel employees unleash their imagination on whiteboards placed throughout the company. CEO Kim explained, "I set them up everywhere, inspired by my time at Microsoft from 2010 to 2019," adding, "It's an environment where team members can jot down ideas as soon as they come to mind and organize them later."


Kim Jooyoung, CEO of HyperAccel, is interviewed on the 3rd at HyperAccel in Seocho-gu, Seoul. Photo by Kang Jinhyung

HyperAccel and CEO Kim are preparing to bring the blueprints written on those whiteboards to life in earnest this year. In March, the company will launch the world's first Language Processing Unit (LPU), which it developed independently. This LPU is expected to be used primarily by Naver Cloud. Later in the year, HyperAccel will release an on-device chip co-developed with LG Electronics. CEO Kim stated, "If 2025 was a 'year of development,' this year is a 'year of launch.' For a startup, releasing two chips in a single year is no easy feat, but it will be a meaningful year." Based on these achievements, the company plans to increase sales and aims to go public around 2028.

The industry is particularly focused on the LPU, an artificial intelligence (AI) chip first conceptualized by HyperAccel in 2023. The LPU is a type of Neural Processing Unit (NPU) specialized for optimizing the operation of Large Language Models (LLMs), which enables enhanced AI inference performance.


In November of last year, HyperAccel completed all development and design work on the LPU and handed the schematics over to Samsung Electronics' foundry (semiconductor contract manufacturing) division to begin physical production, which uses a 4nm (1nm = one-billionth of a meter) process. HyperAccel's LPU is evaluated to be 50% faster in processing speed than the AI-focused Graphics Processing Units (GPUs) currently on the market, with a price-to-performance ratio up to 2.4 times better.


The LPU developed by HyperAccel, scheduled for release this March, and the proprietary full-stack software architecture applied to it. Photo: HyperAccel

HyperAccel has also innovated in its internal architecture. CEO Kim explained, "While traditional GPUs and NPUs are architectures composed of thousands of small cores, the LPU consists of several dozen large cores. That is the key difference." He continued, "When there are too many cores, data must shuttle back and forth multiple times, which can reduce bandwidth (the maximum amount of data that can be transmitted per unit of time). By increasing the size of each core and reducing their number, we designed an 'end-to-end' structure that allows data to flow through in a single pass, making it possible to preserve nearly 90% of the bandwidth."


The AI market is considered to have entered a period of upheaval since last year. The structure in which Nvidia exclusively supplied GPUs and dominated profits began to show cracks with the emergence of Google's Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) that Google developed to accelerate deep learning computations. CEO Kim commented, "A GPU based on the Blackwell architecture consumes far more power than an entire building. As the market seeks more economical alternatives with greater energy efficiency, the TPU has drawn significant attention." In this context, he emphasized that HyperAccel's LPU, designed to be affordable and to minimize power consumption, "holds significant potential."


Looking beyond the LPU, CEO Kim stressed that for South Korea to establish a leading position in AI, cultural changes such as 'open collaboration' among startups and 'social inclusiveness' toward talented individuals are also necessary. He said, "The AI market is changing so rapidly that it's difficult to keep up alone. In the United States, startups routinely share what they need and collaborate to develop products; open collaboration is the norm. That's their secret to keeping pace with the market." CEO Kim also noted, "Due to the trade conflict between the United States and China, AI companies now face trade regulations that require them to report developments to the US and other countries after product development. There are many regulations, but if they can be reduced, our semiconductors could attract even more attention in the coming era of inference."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
