"More Important Than AI Commercialization"... Google's Expectations for AlphaEvolve [Tech Talk]

AlphaEvolve: The Culmination of Mathematical AI
AI Training Algorithms That Improve Themselves
Potential to Become the Key to Computing Cost Competition

Google led the early development of artificial intelligence (AI) through DeepMind, the UK-based AI company it acquired in 2014. The company invented foundational technologies for generative AI, such as the attention mechanism and transformers. However, Google was once seen as lagging behind competitors like OpenAI in commercializing these technologies, leading to concerns about its future. Google DeepMind, in particular, focused more on pure scientific research, such as AI for solving mathematical problems and predicting protein structures, rather than on immediately profitable AI applications.


Now, Google, which prioritized scientific research over short-term profits, may finally be reaping the rewards. The recently unveiled 'algorithm-discovering AI' AlphaEvolve is the culmination of DeepMind's mathematical AI research, and it holds the potential to address the most pressing challenge in the current AI industry: the cost of computing resources.

Google, long interested in AI for solving mathematical problems, completes AlphaEvolve
After the unveiling of AlphaFold, Sundar Pichai (left), CEO of Google, posted a photo taken with Demis Hassabis, founder of DeepMind. Screenshot from X by Sundar Pichai


On May 14 (local time), Google DeepMind unveiled AlphaEvolve, an AI agent that autonomously improves and verifies the performance of computer-program algorithms.


When Google's generative AI model Gemini Flash generates new candidate algorithms, Gemini Pro tests them in a virtual environment and selects only the best ones to propose. This process reduces the risk of hallucinations, a chronic problem in AI, and allows human programmers to apply AI-generated solutions with confidence.
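The generate-test-select loop described above can be sketched as a simple evolutionary search. Here `mutate` and `score` are hypothetical stand-ins for the roles the article assigns to Gemini Flash (proposing variants) and Gemini Pro (sandboxed evaluation); this is an illustration of the pattern, not Google's actual interface:

```python
import random

def evolve(initial_program, mutate, score, generations=20, population=8):
    """Minimal generate-evaluate-select loop.

    mutate(p) -> a candidate variant of p   (the "Gemini Flash" role)
    score(p)  -> numeric fitness of p,      (the "Gemini Pro" role:
                 measured by running it      verify before proposing)
    """
    best = initial_program
    best_score = score(best)
    for _ in range(generations):
        # Generate a batch of candidate variants of the incumbent.
        candidates = [mutate(best) for _ in range(population)]
        # Keep a candidate only if it verifiably beats the incumbent.
        for cand in candidates:
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best, best_score

# Toy usage: "programs" are integers, fitness is closeness to a target.
target = 42
prog, fitness = evolve(
    initial_program=0,
    mutate=lambda p: p + random.choice([-3, -1, 1, 3]),
    score=lambda p: -abs(p - target),
)
```

Because candidates are only accepted when they score strictly better, the loop can never regress, which is the property that lets human engineers trust the surviving output.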


AlphaEvolve represents the culmination of DeepMind's AI mathematics research. DeepMind had consistently invested in developing AlphaGeometry and AlphaTensor, which solve complex mathematical problems, and that groundwork enabled it to complete AlphaEvolve, which excels at optimizing matrix multiplication.

The core of AI algorithms: optimizing matrix multiplication

Generally, multiplying a 4x5 matrix by a 5x5 matrix requires 100 operations, but with the discovery of Strassen's algorithm in 1969 this was reduced to 80. AlphaEvolve succeeded in reducing it further to 76. Screenshot from Google

Matrix multiplication is at the heart of almost all algorithms, especially AI algorithms. Training generative AI involves breaking astronomical amounts of data down into matrix multiplications and processing them on graphics processing units (GPUs). Therefore, discovering ways to perform matrix multiplication faster and more efficiently can fundamentally enhance AI performance.
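For a concrete sense of where the figure-caption numbers come from: the schoolbook algorithm for an m×n by n×p product performs one scalar multiplication per (row, column, inner-index) triple, m·n·p in total, which yields the 100-operation count for a 4x5 by 5x5 product. A minimal sketch:

```python
def naive_matmul_count(m, n, p):
    """Scalar multiplications used by the schoolbook algorithm
    for an (m x n) @ (n x p) matrix product."""
    count = 0
    for i in range(m):          # each output row
        for j in range(p):      # each output column
            for k in range(n):  # each term a[i][k] * b[k][j]
                count += 1      # one scalar multiply
    return count

print(naive_matmul_count(4, 5, 5))  # 100
```

Fast algorithms like Strassen's (and AlphaEvolve's new schemes) win by finding clever combinations of sums and differences that need fewer of these scalar multiplications.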


Numerous mathematicians have studied faster methods for matrix multiplication, but progress by hand has largely stalled. In contrast, AI, which tirelessly repeats experiments and verification, may devise breakthroughs that humans cannot. According to Google's announcement, AlphaEvolve succeeded in reducing the number of operations required to multiply a 4x5 matrix by a 5x5 matrix from 80 to 76, a reduction of 4 operations. This improves on a result that had stood for 56 years, since the discovery of Strassen's algorithm in 1969.
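Strassen's 1969 idea, mentioned above, trades additions for multiplications: a 2x2 matrix product can be formed with 7 scalar multiplications instead of the naive 8, and applying the trick recursively to matrix blocks speeds up large products. Below is the classic 2x2 identity; note this is Strassen's original scheme, not AlphaEvolve's new 76-multiplication construction:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)   # the 7 products; everything else
    m2 = (c + d) * e         # is additions and subtractions
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Shaving even one multiplication from a small base case compounds when the scheme is applied recursively, which is why a drop from 80 to 76 operations matters at scale.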


Such algorithmic improvements are key to reducing computing resource costs, the most significant issue in the AI industry today. Reducing the number of matrix multiplication operations required for AI training also reduces the number of GPUs needed. For companies, this directly translates to lower operational burdens. DeepMind has already reported that, after testing an early version of Alphaevolve in Google's data centers, it succeeded in reducing total computing resource usage by 0.7%.

The significance of 0.7% efficiency: a breakthrough in the race for AI capital

While '0.7% of computing resource costs' may seem minor as a number, it represents a substantial sum at the scale of AI investment. Google revealed that it was using 50,000 tensor processing units (TPUs) for AI training in 2023; a 0.7% reduction corresponds to 350 TPUs. The annual contract price for the latest TPU version, TPUv5p, is $25,752 per unit (about 35 million KRW) on Google Cloud. This means that even the test version of AlphaEvolve has already saved at least $9 million (about 12.4 billion KRW) in costs.
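The article's arithmetic can be checked directly from the figures it quotes (fleet size and list price are as reported; the multiplication is ours):

```python
tpus_in_use = 50_000       # TPUs Google reported using for AI training in 2023
reduction = 0.007          # 0.7% compute saving from AlphaEvolve's test version
tpu_annual_price = 25_752  # USD per TPUv5p, 1-year contract on Google Cloud

tpus_saved = round(tpus_in_use * reduction)   # round away float noise
annual_saving = tpus_saved * tpu_annual_price
print(tpus_saved, annual_saving)  # 350 9013200  -> roughly $9 million/year
```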


Google's self-developed AI chip line, the TPU series. The latest generation, the TPUv5 series, recorded performance similar to Nvidia's H100 in various benchmarks. Google

The efficiency gains from AlphaEvolve's algorithmic improvements are expected to grow over time. In the coming years, the number of chips used to train large AI models will reach the hundreds of thousands, and saving even 3-4% of them would mean cutting the cost of one or two hyperscale data centers.
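To put that "3-4% of hundreds of thousands of chips" in concrete terms, a purely hypothetical back-of-the-envelope calculation (300,000 chips is an assumed fleet size for illustration, not a Google figure):

```python
chips = 300_000                    # assumed future training fleet (hypothetical)
for saving in (0.03, 0.04):        # the 3-4% efficiency range cited above
    spared = round(chips * saving) # chips that would no longer be needed
    print(f"{saving:.0%} gain -> {spared:,} chips spared")
# 3% gain -> 9,000 chips spared
# 4% gain -> 12,000 chips spared
```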


Google's AI rival OpenAI is also pursuing the 'Stargate' project to build hyperscale data centers equipped with millions of GPUs, at an estimated cost of $500 billion (about 690 trillion KRW) over four years. Google must keep pace in this capital race to avoid falling behind in AI, but further advances in AlphaEvolve could let it achieve Stargate-level computing at a much lower cost.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.

