KAIST Develops World's Top Computational Framework Without Reliance on NVIDIA

A computational framework developed in South Korea allows a single GPU-equipped computer to complete, in about half the time, calculations that previously required 25 computers running for 2,000 seconds. Notably, the framework also includes memory-management technology that does not rely on NVIDIA.


KAIST announced on May 27 that a research team led by Professor Minsu Kim of the School of Computing has developed a general-purpose computation framework, named "GFlux," that can rapidly process graph computations involving one trillion edges on a single GPU with limited memory capacity.


(From left) PhD candidates Heeyong Yoon, Donghyung Han, and Seyun Oh, and Professor Minsu Kim. Provided by KAIST

GFlux divides graph computations into GPU-optimized unit tasks called "GTasks." Its core technology is a specialized scheduling technique that efficiently allocates these tasks to the GPU and processes them.
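
The article does not publish GFlux's scheduler, but the general out-of-core pattern it describes, splitting work into GPU-sized unit tasks and streaming them through the device, can be sketched as follows. Everything here (the Edge layout, the chunk size, and the process_chunk kernel) is an illustrative assumption, not GFlux code.

```
// Out-of-core streaming sketch (not GFlux itself): edge chunks are
// copied from host memory to the GPU one at a time and processed by
// a kernel, so the whole graph never has to fit in GPU memory.
#include <algorithm>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Edge { unsigned src, dst; };

// Toy per-chunk kernel: counts edges whose source ID is smaller
// than the destination ID. A real task would do graph analytics.
__global__ void process_chunk(const Edge* edges, size_t n,
                              unsigned long long* acc) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n && edges[i].src < edges[i].dst)
        atomicAdd(acc, 1ULL);
}

int main() {
    const size_t total_edges = 1 << 20;   // toy graph
    const size_t chunk_edges = 1 << 16;   // "GTask"-like unit sized for GPU memory
    std::vector<Edge> host(total_edges);
    for (size_t i = 0; i < total_edges; ++i)
        host[i] = { unsigned(i % 997), unsigned(i % 1009) };

    Edge* dev = nullptr; unsigned long long* acc = nullptr;
    cudaMalloc(&dev, chunk_edges * sizeof(Edge));
    cudaMalloc(&acc, sizeof *acc);
    cudaMemset(acc, 0, sizeof *acc);

    // Stream the edge list through the GPU, one chunk per iteration.
    for (size_t off = 0; off < total_edges; off += chunk_edges) {
        size_t n = std::min(chunk_edges, total_edges - off);
        cudaMemcpy(dev, host.data() + off, n * sizeof(Edge),
                   cudaMemcpyHostToDevice);
        process_chunk<<<unsigned((n + 255) / 256), 256>>>(dev, n, acc);
    }
    unsigned long long result = 0;
    cudaMemcpy(&result, acc, sizeof result, cudaMemcpyDeviceToHost);
    printf("edges with src < dst: %llu\n", result);
    cudaFree(dev); cudaFree(acc);
}
```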


The framework converts graphs into HGF, a proprietary compressed format optimized for GPU processing, and stores and manages them on storage devices such as SSDs.
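
HGF's actual layout is proprietary and not described in the article. As a point of reference, a common way compressed graph formats beat raw adjacency storage is to sort each neighbor list, store only the gaps between consecutive neighbors, and encode those gaps as variable-length integers. The sketch below shows that generic technique; it is not the HGF encoding.

```
// Host-side sketch of a generic compressed adjacency encoding
// (delta + varint). This is NOT the HGF layout, which the article
// does not describe; it only shows why such formats beat raw CSR.
#include <cstdint>
#include <cstdio>
#include <vector>

// Encode one sorted neighbor list: store gaps as LEB128-style varints.
static void encode_list(const std::vector<uint64_t>& nbrs,
                        std::vector<uint8_t>& out) {
    uint64_t prev = 0;
    for (uint64_t v : nbrs) {
        uint64_t gap = v - prev;   // small gaps need only one or two bytes
        prev = v;
        do {
            uint8_t byte = gap & 0x7F;
            gap >>= 7;
            out.push_back(byte | (gap ? 0x80 : 0));  // high bit = "more bytes follow"
        } while (gap);
    }
}

int main() {
    std::vector<uint64_t> nbrs = {5, 6, 9, 1000, 1001, 1003}; // sorted neighbors
    std::vector<uint8_t> blob;
    encode_list(nbrs, blob);
    printf("raw 8-byte IDs: %zu bytes, varint-encoded: %zu bytes\n",
           nbrs.size() * sizeof(uint64_t), blob.size());
}
```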


In practice, the research team reduced a one-trillion-edge graph, which requires 9 terabytes (TB) when stored in the standard CSR format, to roughly half that size (about 4.6 TB) using the HGF format.
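
A back-of-envelope check makes the 9 TB figure plausible. The 8-byte IDs and the vertex count below are assumptions; the article states neither.

```
// Rough sanity check of the 9 TB CSR figure. Assumes 8-byte edge
// entries, 8-byte offsets, and ~1e11 vertices (none stated in the article).
#include <cstdio>

int main() {
    const double edges    = 1e12;   // one trillion edges
    const double vertices = 1e11;   // assumed vertex count
    const double id_bytes = 8.0, off_bytes = 8.0;
    double csr_tb = (edges * id_bytes + (vertices + 1) * off_bytes) / 1e12;
    printf("approx. CSR size: %.1f TB (article: ~9 TB; HGF: ~4.6 TB)\n", csr_tb);
}
```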


Additionally, the team applied a 3-byte (24-bit) address system on GPUs for the first time; such addresses had not been used previously because of memory alignment issues. Replacing conventional 4-byte addresses reduced GPU memory usage by about 25%.
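
The alignment issue is that GPUs load memory in aligned words, so a 3-byte value cannot be read as a single native integer. A common workaround, sketched below as an assumption about the general technique rather than GFlux's implementation, is to assemble each 24-bit index from three byte loads; moving from 4-byte to 3-byte IDs is exactly the 25% saving cited.

```
// Sketch of packed 3-byte (24-bit) vertex indices on the GPU.
// Each index is assembled from three byte loads, which are always
// aligned; 3 bytes instead of 4 saves 25% of index memory.
// This illustrates the general technique, not GFlux's actual code.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__host__ __device__ inline unsigned load_u24(const unsigned char* p, size_t i) {
    const unsigned char* b = p + 3 * i;   // byte-addressed: no alignment issue
    return (unsigned)b[0] | ((unsigned)b[1] << 8) | ((unsigned)b[2] << 16);
}

__global__ void sum_ids(const unsigned char* packed, size_t n,
                        unsigned long long* out) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, (unsigned long long)load_u24(packed, i));
}

int main() {
    const size_t n = 1024;
    std::vector<unsigned char> host(3 * n);
    for (size_t i = 0; i < n; ++i) {      // pack IDs 0..n-1 into 3 bytes each
        host[3*i]   = i & 0xFF;
        host[3*i+1] = (i >> 8) & 0xFF;
        host[3*i+2] = (i >> 16) & 0xFF;
    }
    unsigned char* dev = nullptr; unsigned long long* out = nullptr;
    cudaMalloc(&dev, host.size());
    cudaMalloc(&out, sizeof *out);
    cudaMemset(out, 0, sizeof *out);
    cudaMemcpy(dev, host.data(), host.size(), cudaMemcpyHostToDevice);
    sum_ids<<<unsigned((n + 255) / 256), 256>>>(dev, n, out);
    unsigned long long sum = 0;
    cudaMemcpy(&sum, out, sizeof sum, cudaMemcpyDeviceToHost);
    printf("sum of packed IDs: %llu (expected %zu)\n", sum, n * (n - 1) / 2);
    cudaFree(dev); cudaFree(out);
}
```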


The research team also validated the performance of GFlux through complex graph computations such as triangle counting. Triangle counting is an operation that identifies and counts all triangles formed by three mutually connected vertices in a graph; it is widely used in data analysis and artificial intelligence.
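
To make the definition concrete, here is a plain CPU reference implementation of triangle counting over a CSR graph, using the textbook sorted-list intersection method. It illustrates the operation only; GFlux's GPU implementation is not published.

```
// CPU reference for triangle counting on a small CSR graph:
// for each edge (u,v) with u < v, count common neighbors w > v,
// so each triangle {u < v < w} is counted exactly once.
#include <cstdio>
#include <vector>

long long count_triangles(const std::vector<int>& off, const std::vector<int>& adj) {
    long long total = 0;
    int n = (int)off.size() - 1;
    for (int u = 0; u < n; ++u)
        for (int e = off[u]; e < off[u + 1]; ++e) {
            int v = adj[e];
            if (v <= u) continue;            // orient each edge as u < v
            int a = off[u], b = off[v];      // merge-intersect sorted neighbor lists
            while (a < off[u + 1] && b < off[v + 1]) {
                if (adj[a] == adj[b]) { if (adj[a] > v) ++total; ++a; ++b; }
                else if (adj[a] < adj[b]) ++a;
                else ++b;
            }
        }
    return total;
}

int main() {
    // 4-clique on vertices 0..3: every pair connected -> 4 triangles
    std::vector<int> off = {0, 3, 6, 9, 12};
    std::vector<int> adj = {1,2,3, 0,2,3, 0,1,3, 0,1,2};
    printf("triangles: %lld\n", count_triangles(off, adj));
}
```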


The validation was conducted on a graph with 70 billion edges. As a result, GFlux succeeded in completing the triangle counting operation in just 1,184 seconds using only a single computer equipped with a GPU.


This represents the largest graph ever successfully processed for triangle counting by a single computer. The previous best-performing technology required 25 computers connected via a high-speed network to process the triangle counting operation in 2,000 seconds.


The research team highlights another strength of GFlux: it does not rely at all on NVIDIA's CUDA Unified Memory. CUDA is NVIDIA's software stack for general-purpose GPU (GPGPU) computing, and it runs only on NVIDIA GPUs equipped with CUDA cores.
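
For context, the dependency in question looks like this: CUDA Unified Memory exposes a single pointer, obtained via cudaMallocManaged, that the driver pages between host and GPU automatically, whereas a framework managing memory itself stages data with explicit allocations and copies. The snippet below contrasts the two; it is illustrative, not GFlux code.

```
// (a) CUDA Unified Memory: one pointer valid on host and device,
//     with paging handled automatically by the NVIDIA driver.
// (b) Explicit staging: separate buffers and explicit copies, which
//     keeps every transfer under the framework's own control.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;

    // (a) Unified Memory: the dependency GFlux reportedly avoids.
    float* um = nullptr;
    cudaMallocManaged(&um, n * sizeof(float));
    um[0] = 1.0f;                           // host write, paged on demand
    cudaFree(um);

    // (b) Explicit management: separate host/device buffers.
    float* host = (float*)malloc(n * sizeof(float));
    float* dev = nullptr;
    host[0] = 1.0f;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("round-tripped value: %f\n", host[0]);
    cudaFree(dev);
    free(host);
}
```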


The team also emphasized that GFlux incorporates dedicated memory-management technology for GTasks, which manages main memory and GPU memory in an integrated way to prevent computational failures caused by memory shortages. This is a core technology of the GFlux framework.
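
The article does not detail this mechanism, but a minimal sketch of the idea, checking free GPU memory before allocating and spilling to pinned host memory when the device is short, might look like the following. The Buffer type, the reserve size, and the alloc_anywhere policy are all hypothetical.

```
// Hedged sketch of integrated host/GPU memory management: query free
// device memory (cudaMemGetInfo) before each allocation and fall back
// to pinned host memory when the GPU is short, so a job degrades
// instead of failing. Names and policy are hypothetical, not GFlux's.
#include <cstdio>
#include <cuda_runtime.h>

struct Buffer { void* ptr; bool on_gpu; };

// Allocate on the GPU if enough memory is free; otherwise fall back
// to pinned host memory, which the GPU can still access over PCIe.
Buffer alloc_anywhere(size_t bytes) {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    Buffer buf{nullptr, false};
    if (bytes + (256u << 20) < free_b &&    // keep a 256 MB reserve (arbitrary)
        cudaMalloc(&buf.ptr, bytes) == cudaSuccess) {
        buf.on_gpu = true;
    } else {
        cudaMallocHost(&buf.ptr, bytes);    // spill: pinned host memory
    }
    return buf;
}

int main() {
    Buffer b = alloc_anywhere(64 << 20);    // request 64 MB
    printf("allocated on %s\n", b.on_gpu ? "GPU" : "pinned host memory");
    if (b.on_gpu) cudaFree(b.ptr); else cudaFreeHost(b.ptr);
}
```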


Professor Minsu Kim stated, "Recently, in the field of artificial intelligence (AI), there has been a surge in cases where knowledge bases or databases are stored and utilized as graphs. However, in general, graph computations have been limited to relatively simple operations due to the constraints of GPU memory. GFlux technology is expected to contribute to effectively solving these problems."


This research was supported by the SW StarLab program of the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Ministry of Science and ICT, and by the Mid-Career Researcher Program of the National Research Foundation of Korea.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

