[Chip Talk] How Did Nvidia Establish a GPU Monopoly Structure?

Unlike the central processing unit (CPU) market, where Intel and AMD compete fiercely, the graphics processing unit (GPU) market is a monopoly in which Nvidia holds more than an 80% share. While intensified competition between Intel and AMD injects instability into the CPU market, the GPU market continues to grow rapidly on the back of Nvidia's long-standing monopoly.


How has Nvidia managed to hold such a monopolistic position in the GPU ecosystem? When CEO Jensen Huang founded Nvidia in 1993 together with Chris Malachowsky and Curtis Priem, CPUs were the only chips drawing attention; most PC manufacturers at the time used chips made by Intel or AMD. Computer graphics, the field Nvidia chose as its main business, was a niche market of interest only to video game companies.


In his book "Chip War," Chris Miller, a professor at Tufts University, identified the software platform CUDA as the core reason Nvidia has maintained its long-term monopoly in the GPU market.


Nvidia, which began as a startup founded in a San Jose restaurant in Silicon Valley, did not stop at developing GPUs but also set out to build a graphics software ecosystem. CEO Huang is estimated to have invested more than $10 billion in the CUDA project. CUDA lets developers write parallel processing algorithms that run on GPUs using industry-standard programming languages. Using the architecture, however, requires Nvidia GPUs and special stream processing drivers.
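To make the description above concrete, here is a minimal sketch (not taken from the article) of what CUDA code looks like: a vector-addition kernel written in CUDA's C++ dialect. It compiles only with Nvidia's `nvcc` toolchain and runs only on Nvidia GPUs, which is exactly the hardware tie-in described above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one element; the grid of threads replaces the loop.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the host-side code close to ordinary C++.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch the kernel on the GPU
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Built with `nvcc vec_add.cu`, this is the same parallel pattern that AI frameworks layered on top of CUDA exploit at far larger scale.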


By enabling high-speed parallel computing in fields far beyond computer graphics through standard programming languages, Nvidia opened CUDA free of charge not only to graphics experts but to all programmers. Because the software runs only on Nvidia chips, however, every expansion of CUDA usage gave Nvidia an opening into new markets. This is the background to Nvidia's customer base growing beyond game companies into AI, data science, autonomous driving, robotics, and more. Today the largest source of demand for high-speed parallel computing is AI. While the CPU market sees intensified competition between Intel and AMD, the lock-in effect of the CUDA framework persists in the GPU market, sustaining Nvidia's long-term monopoly.


Nvidia now faces an environment in which GPU demand can expand even further. Accelerated computing based on GPUs and other hardware accelerators offers high efficiency and is well suited to AI and big data workloads. As competition among companies to adopt generative AI intensifies, data center infrastructure is gradually shifting from traditional CPU-based general-purpose computing to accelerated computing built on GPUs.


Meanwhile, the trickle-down effect spreading from Nvidia's monopolistic ecosystem to the rest of the semiconductor industry is welcome news for Korean semiconductor companies. Analyst No Geun-chang of Hyundai Motor Securities estimates that if Nvidia supplies about 30,000 GPUs to a single supercomputer site, each site can generate roughly $600 million to $1.2 billion in sales, with corresponding demand for high-bandwidth memory (HBM) of around 23 million to 45 million units. Amid the current severe downturn in semiconductor demand, HBM demand driven by Nvidia could prove a significant boon for the industry.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

