Elon Musk, CEO of Tesla, plans to expand the number of graphics processing units (GPUs) used in xAI's supercomputer 'Colossus' to around one million, according to major foreign media reports on the 4th (local time).
The Greater Memphis Chamber announced in a statement that an expansion of the Colossus system is underway and that, to support it, GPU supplier Nvidia, along with Dell and Supermicro, which have been assembling the computer server racks, will establish facilities in Memphis. The chamber added that it will set up a dedicated xAI operations team to provide round-the-clock concierge service.
Colossus is a large-scale computing facility in Memphis, Tennessee, built by xAI to train AI models such as 'Grok'. In September, CEO Musk said on X (formerly Twitter), "Colossus, built in just 122 days, is the world's most powerful AI training system, powered by 100,000 Nvidia H100 GPUs," adding, "We plan to double the system in size within a few months with an additional 50,000 Nvidia H200 GPUs." If the number of GPUs installed in Colossus reaches one million, the system will be roughly ten times its current size.
The development of Colossus is crucial to Musk's bid for an edge in the AI arms race. OpenAI, the developer of ChatGPT, has signed a partnership worth about $14 billion with Microsoft, while Anthropic, developer of the chatbot Claude, has attracted roughly $8 billion in investment from Amazon.
Foreign media assessed that "rather than forming partnerships, Musk has built his own supercomputing capability through his power and influence in the technology sector," and that "xAI has caught up with competitors less than a year after its founding." xAI's corporate value is currently estimated at $45 billion (about 64 trillion won), and the company recently raised an additional $5 billion in funding.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.