
Taiwan Foxconn "World's Largest Nvidia GB200 Chip Factory Under Construction in Mexico" (Comprehensive)

Taiwan's Foxconn (Hon Hai Precision Industry) announced on the 8th that it is building the world's largest factory in Mexico to manufacture the GB200 chip from NVIDIA, the global leader in artificial intelligence (AI) chips.

[Image source=Reuters Yonhap News]

According to major foreign media, Benjamin Ting, Senior Vice President of Foxconn's Cloud Enterprise Solutions division, emphasized the partnership between Foxconn and NVIDIA at Foxconn's annual Tech Day event held in Taipei on the same day.


Vice President Ting explained, "We are building this factory to meet the enormous demand for NVIDIA's Blackwell platform," adding that inquiries about the platform are pouring in. He said, "We are building the largest GB200 production facility on the planet," but added, "I don't think I can disclose the location right now." Shortly afterward, however, Foxconn Chairman Young Liu revealed that the factory is located in Mexico, emphasizing that it is being built on a very large scale.


The GB200 is a new AI chip that NVIDIA produces on its Blackwell architecture. Blackwell is NVIDIA's latest architecture, succeeding the Hopper architecture behind its existing AI chips such as the H100 and H200, and will enter full-scale mass production in the fourth quarter. The product supports AI training and real-time large language model (LLM) inference for models with up to 10 trillion parameters. It is manufactured on the process of Taiwan's TSMC, the world's largest foundry (semiconductor contract manufacturer).


NVIDIA plans to offer the 'GB200 NVL72' computing unit, which combines 72 Blackwell GPUs with 36 of its own Grace central processing units (CPUs). NVIDIA has previously said the GB200 delivers up to 30 times the performance of the H100 on large language model (LLM) workloads.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
