Google, Meta, Intel Develop Proprietary AI Chips
Beyond Smarter AI, 'Round 2' of the Race Turns to Sustainable AI
Global big tech companies, which have been competing to build smarter artificial intelligence (AI) models, have now entered the AI chip war. It amounts to a declaration that they will no longer be led by Nvidia, the market's absolute leader. It also reflects the judgment that the rapidly expanding AI market cannot be served properly by relying on Nvidia alone.
This week, Intel, Google, and Meta all unveiled new AI chip products almost simultaneously. Let's start with Intel. Intel's new weapon is 'Gaudi 3.' Intel emphasized that Gaudi 3 can train large language models (LLMs) 50% faster than Nvidia's latest graphics processing unit (GPU), the 'H100,' while offering more than twice the power efficiency.
Another feature is support for open software (SW), which allows AI models and services to be scaled flexibly. Customers retain autonomy over their software stack, so Gaudi 3 can be adapted to each customer's environment. This contrasts with Nvidia's approach of locking in the market by bundling its AI chips with its proprietary software platform, 'CUDA.'
While Intel has started to shake Nvidia's dominance, Google has challenged Intel with a central processing unit (CPU) of its own. Google Cloud recently unveiled 'Axion,' a server CPU built on designs from semiconductor design company Arm. This is Google's first server CPU. According to Google, Axion offers 50% better performance and 60% higher energy efficiency than the Intel CPUs that have long led the server CPU market.
Meta has also stepped up. On the 10th (local time), Meta announced its next-generation AI chip, 'MTIA (Meta Training and Inference Accelerator),' via a blog post. It succeeds the first-generation chip, 'v1,' unveiled last May. Alongside the chip, Meta revealed a software ecosystem intended to compete with Nvidia's CUDA. MTIA is designed to run Meta's ad recommendation and ranking algorithms efficiently, and the ultimate goal is a chip that can train AI models such as Meta's own LLM, 'LLaMA.'
Microsoft (MS), Amazon (AWS), and OpenAI are no exceptions. MS introduced its own AI chip, 'Maia 100,' at the end of last year, and AWS launched 'Trainium2.' OpenAI is also raising large-scale investment to build its own AI chips.
As big tech companies that once focused on AI models or services extend their reach into independent chip development, the AI competition has entered a new phase. The battle has shifted from who can produce smarter AI to who can set sustainable rules for the competition.
Both AI developers and the semiconductor industry agree that the AI industry is not sustainable under Nvidia's sole dominance. Nvidia's chips are expensive and in short supply, and even if prices and supply were brought to realistic levels, limitations would remain. As AI services spread, demand is shifting toward inference chips, for which Nvidia's GPUs are not an ideal fit. Moreover, whatever chip a company adopts, the deeper its dependence on Nvidia, the more AI's evolution can be held back, because developers must build technology suited to the chip rather than software optimized for their own models and services. It will be interesting to see how this push to break free of the 'insurmountable wall' of Nvidia reshapes the AI and semiconductor market landscape.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![Beyond Performance, Efficiency Competition... Big Tech's 'Chip Independence Declaration' [AI Bite News]](https://cphoto.asiae.co.kr/listimglink/1/2024041211214937525_1712888510.jpg)

