Kim Jooyoung, CEO of HyperExcel
Specialized Chip for Advanced LLM Operation
2.4 Times the Cost-Effectiveness of AI GPUs
50% Faster Processing Speed
Primarily Used on Naver Cloud
On-Device Chip Co-Developed with LG Electronics to Be Released at Year's End
The headquarters of HyperExcel is located in Seocho-dong, Seocho-gu, Seoul. Upon entering the office of CEO Kim Jooyoung, a large whiteboard covering the entire right wall immediately catches the eye. The whiteboard is filled with various formulas, figures, and arithmetic symbols arranged in irregular patterns. HyperExcel employees use the whiteboards placed throughout the company to let their imaginations run free. CEO Kim explained, "I set up whiteboards everywhere, inspired by my time working at Microsoft from 2010 to 2019," adding, "I wanted to create an environment where team members can jot down and organize their ideas as soon as they come to mind."
Kim Jooyoung, CEO of HyperExcel, is being interviewed on the 3rd at HyperExcel in Seocho-gu, Seoul. Photo by Kang Jinhyung
HyperExcel and CEO Kim plan to put the blueprints written on those whiteboards into action this year. In March, the company will launch the world’s first Language Processing Unit (LPU), which it developed independently. This LPU is expected to be mainly used by Naver Cloud. At the end of the year, an on-device chip co-developed with LG Electronics will also be released. CEO Kim said, "If 2025 was a year of development, this year is a year of launches. For a startup, releasing two chips in one year is not easy, so it will be a meaningful year." Based on this, the company aims to increase sales and challenge itself to go public around 2028.
The industry is paying particular attention to the LPU, an AI chip HyperExcel first conceived in 2023. The LPU is a type of Neural Processing Unit (NPU) specialized for running Large Language Models (LLMs), enhancing AI inference performance.
Last November, HyperExcel completed development and design of the LPU and handed the design over to Samsung Electronics' foundry for mass production on a 4-nanometer (nm; 1 nm = one billionth of a meter) process. HyperExcel's LPU is estimated to be 50% faster in processing speed and up to 2.4 times more cost-effective than AI Graphics Processing Units (GPUs) currently on the market.
HyperExcel's LPU, set for release this March, runs the company's proprietary full-stack software architecture. Courtesy of HyperExcel
HyperExcel has also innovated in the chip's internal architecture. CEO Kim explained, "While conventional GPUs and NPUs are built from thousands of small cores, the LPU is composed of a handful of large cores. When there are too many cores, effective bandwidth (the maximum amount of data that can be transmitted per unit of time) drops as data shuttles back and forth between them. By enlarging the cores and reducing their number, we designed an 'end-to-end' structure in which data flows through in a single pass, sustaining nearly 90% of peak bandwidth for greater efficiency."
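The bandwidth trade-off CEO Kim describes can be illustrated with a toy model: if every extra core-to-core hop adds transfer overhead, an architecture that routes data through many small cores loses more of its peak bandwidth than one that pushes data through a few large cores in a single pass. All numbers below are illustrative assumptions for the sketch, not HyperExcel specifications.

```python
# Toy model of the bandwidth trade-off: each core-to-core hop adds a
# fixed fractional transfer overhead, so effective bandwidth falls as
# hop count grows. Peak bandwidth and overhead values are assumptions.

PEAK_BANDWIDTH_GBPS = 1000.0  # assumed peak memory bandwidth


def effective_bandwidth(hops: int, overhead_per_hop: float = 0.02) -> float:
    """Effective bandwidth after per-hop transfer overhead.

    Data that crosses many cores spends a growing share of its time
    in transit, shrinking the bandwidth actually available to compute.
    """
    return PEAK_BANDWIDTH_GBPS / (1 + hops * overhead_per_hop)


# Many small cores: data shuttles across, say, 40 hops.
many_small = effective_bandwidth(hops=40)

# Few large cores, single end-to-end pass: say, 5 hops.
few_large = effective_bandwidth(hops=5)

print(f"many small cores: {many_small:.0f} GB/s "
      f"({many_small / PEAK_BANDWIDTH_GBPS:.0%} of peak)")
print(f"few large cores:  {few_large:.0f} GB/s "
      f"({few_large / PEAK_BANDWIDTH_GBPS:.0%} of peak)")
```

Under these assumed values the single-pass layout sustains roughly 90% of peak bandwidth, in line with the figure quoted in the interview, while the many-hop layout falls well below it.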
The AI market is considered to have entered a period of upheaval since last year. The structure in which Nvidia exclusively supplied GPUs and took the lion's share of profits began to crack with the emergence of Google's Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) Google developed to accelerate deep learning computations. CEO Kim said, "GPUs, especially the Blackwell series, consume more power than an entire building. As the market seeks more economical, energy-efficient alternatives, TPUs have attracted significant attention." He emphasized that HyperExcel's LPU, designed to minimize both cost and power consumption, "has ample potential" in this context.
Looking beyond the LPU, CEO Kim stressed that for Korea to establish a leading position in AI, there must also be cultural changes such as 'open collaboration' among startups and 'social inclusiveness' for talented individuals. He stated, "The AI market changes so rapidly that it’s difficult to keep up alone. In the United States, startups routinely share what they need and collaborate to develop products. That’s how they keep pace with the market." CEO Kim also noted, "Due to trade conflicts between the United States and China, AI companies must now consider trade regulations, such as reporting related information to the US and other countries after developing new products. There are many different regulations, but if we can reduce them, Korean semiconductors will gain even more attention in the coming era of AI inference."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

