Fierce Performance Competition in the AI Industry... Spotlight on 'ML Perf' [Tech Talk by Im Ju-hyung]

Over 20 AI Semiconductor Companies Compete
Performance Data Varies... Calls for a Fair 'Competition Arena'
'ML Commons,' Launched Through Industry-Academia Collaboration, Hosts 'ML Perf'
Benchmark Tests on AI Model Training Speed, Power Consumption
Positive Impact on Technological Competition and Industry Diversity

There are various types of AI semiconductors, each with distinct characteristics. The photo shows the NVIDIA DGX A100 GPU (top) and the Graphcore IPU-M2000 AI system. Photo from the NVIDIA and Graphcore official websites.


[Asia Economy Reporter Lim Juhyung] As the Fourth Industrial Revolution accelerates, the importance of artificial intelligence (AI) continues to grow. Not only global IT giants such as Google and Amazon but also leading Korean companies such as Naver and Kakao are putting every effort into building the best AI neural network models.


Against this backdrop, AI semiconductors specialized in processing AI algorithms are drawing attention. Currently, more than 20 companies worldwide claim to be developing AI semiconductors with top-tier performance. The problem is that there is no standard for determining which of these chips is truly the best. To address this, a 'fair AI competition platform' called 'ML Commons' was established, and the event it hosts is 'ML Perf.'


The Fair AI Competition Platform 'ML Perf'


The history of ML Commons is very short. It was founded on December 4 last year through the collaboration of global tech companies such as Alibaba, Facebook, Google, Intel, and NVIDIA, as well as research institutions including the University of California, Berkeley, Stanford University, and the University of Toronto in Canada.


The purpose of ML Commons is to accelerate innovation in machine learning (ML) while increasing technology accessibility for the public good. To this end, ML Commons develops and provides the latest AI datasets, models, best practices, and benchmarks to various companies.


Among the programs managed by ML Commons, the most notable is undoubtedly 'ML Perf.' ML Perf is a benchmark test that fairly compares the efficiency and training speed of AI systems. Through ML Perf, it is possible to identify which among dozens of AI semiconductors has the best performance.


'ML Commons,' an industry-academia cooperation organization established to promote fair competition and development in artificial intelligence technology. Photo from the ML Commons official website.


The Increasing Variety of AI Semiconductors... The Need for Neutral Information


Why is a transparent and neutral benchmark test like ML Perf necessary? Today, it is estimated that more than 20 companies are intensively developing AI semiconductors. Well-known tech companies such as NVIDIA, Google, AMD, Intel, and ARM produce AI computer chips, but there are also new startups like Graphcore, SambaNova, and Cerebras that have recently entered the field.


The semiconductors they produce each have unique characteristics. NVIDIA’s Graphics Processing Unit (GPU), Google’s Tensor Processing Unit (TPU), and Graphcore’s Intelligence Processing Unit (IPU) are representative examples.


The problem is that there has been no platform for fairly comparing the actual performance of these chips. Semiconductor companies post their own 'benchmark results' on their official websites, but these figures are mostly measured under conditions highly favorable to the company running the test. For buyers of AI semiconductors, such information is hard to take at face value.


ML Commons created ML Perf to provide the most transparent information to semiconductor buyers and to motivate semiconductor developers to compete.


Promoting Competition and Contributing to Industry Diversification


The way ML Perf operates is simple. The organizers prepare representative model examples used in the actual AI industry. Participating companies submit detailed information about their computer equipment along with records of the time taken to process the model and the power consumed. This benchmark allows for a clearer understanding of the characteristics of each AI computer system.
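As a rough illustration of the kind of measurement involved, here is a minimal Python sketch that times a dummy training run and derives time-to-train and throughput figures. The `train_one_epoch` placeholder and the metric names are assumptions made purely for illustration; they are not part of the actual ML Perf submission tooling.

```python
import time


def train_one_epoch(num_samples: int) -> None:
    """Placeholder for a real training loop over a reference model.

    In an actual benchmark run this would train the reference model
    (e.g. an image-classification or language model) on the target hardware.
    """
    for _ in range(num_samples):
        pass  # stand-in for forward/backward passes


def benchmark_training(num_epochs: int, samples_per_epoch: int) -> dict:
    """Time a training run and report illustrative benchmark-style metrics."""
    start = time.perf_counter()
    for _ in range(num_epochs):
        train_one_epoch(samples_per_epoch)
    elapsed = time.perf_counter() - start

    total_samples = num_epochs * samples_per_epoch
    return {
        "time_to_train_seconds": elapsed,
        "throughput_samples_per_second": total_samples / elapsed,
        # Power consumption would come from hardware counters or external
        # meters; it is outside the scope of this sketch.
    }


if __name__ == "__main__":
    print(benchmark_training(num_epochs=3, samples_per_epoch=10_000))
```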


ML Perf is now in its second year. Although it is not very old in terms of years, it has already become a fiercely competitive platform where most of the well-known companies in the AI industry participate.


The results of the ML Perf benchmark hosted by ML Commons are freely accessible to everyone. Photo from the ML Commons official website.


Thanks to ML Perf, semiconductor buyers can clearly see which system delivers superior performance. But the positive effects of ML Perf do not end there. Since each semiconductor has different characteristics, the most suitable chip varies depending on the type of model to be processed.


For example, in the latest ML Perf results, NVIDIA's A100 GPU showed overwhelmingly superior performance overall. In particular, it achieved the world's top score on 'BERT,' a natural language processing model that has also been used in Naver's AI chatbot 'Clova.'


However, the A100 was not the best in every category. On 'ResNet-50,' a model used in computer vision to recognize objects in images, Graphcore narrowly surpassed NVIDIA.


In other words, ML Perf clearly shows which system is best suited to running a given AI model, not only promoting fair competition among AI semiconductor makers but also contributing to greater diversity in the industry.
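To illustrate how openly published results like these let a buyer match hardware to a workload, the short Python sketch below picks, for each benchmark model, the system with the shortest time-to-train. The system names and numbers are hypothetical placeholders, not actual ML Perf scores.

```python
# Hypothetical time-to-train results in minutes; NOT actual ML Perf scores.
results = {
    "BERT": {"System A": 30.0, "System B": 42.5},
    "ResNet-50": {"System A": 25.0, "System B": 23.8},
}

# For each benchmark model, pick the system with the shortest time-to-train.
for model, times in results.items():
    best_system = min(times, key=times.get)
    print(f"{model}: fastest system is {best_system} ({times[best_system]} min)")
```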


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
