[Reading Science] Large-Scale AI Tasks Now Possible on Home PCs

Professor Minsu Kim's KAIST Team Develops Technology to Eliminate Bottlenecks in Large-Scale GPU Data Processing

Researchers in Korea have developed a technology that resolves the computational bottleneck caused by the limited memory capacity of GPUs, the processors widely used in artificial intelligence (AI) and other fields. The advance is expected to allow large-scale AI tasks to run even on the GPUs of personal computers (PCs) found in homes and small businesses.


[Photo] Professor Minsu Kim, Department of Computer Science, KAIST

According to KAIST (Korea Advanced Institute of Science and Technology, President Kwang-Hyung Lee) on the 11th, a research team led by Professor Minsu Kim of the Department of Computer Science recently developed INFINEL, a data processing technology that can rapidly transfer analysis results or AI-generated outputs reaching several terabytes in size from the GPU to main memory and store them there.


GPUs are better suited than CPUs to AI training, but unlike CPUs they provide only very limited memory management, which makes it difficult to handle large volumes of data whose size cannot be known in advance. For this reason, it has until now been impossible to use GPUs for highly complex, massively parallel graph computations such as ‘triangle listing’, which enumerates every group of three vertices in a graph that are all connected to one another.


This limitation mattered little in the past, but the rapid growth of AI has driven the construction and use of ever more graph-structured data. When highly complex, massively parallel computations run on graph-structured data, the total output becomes very large and the output size of each individual thread becomes hard to predict, as the sketch below illustrates.
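
To make the problem concrete, here is a minimal CUDA sketch of triangle listing; it is an illustration, not the team's code. One thread handles one edge and intersects the neighbor lists of the edge's two endpoints, so the number of triangles each thread emits depends entirely on the data. The conventional workaround shown, reserving slots in a fixed-capacity output buffer with an atomic counter, silently drops results once the buffer fills, which is precisely the failure mode described above. The example graph, names, and buffer size are all made up.

```cuda
// Illustrative triangle-listing kernel (not the authors' code): one thread
// per edge (u, v) with u < v, graph in CSR form with sorted neighbor lists.
// Each common neighbor w > v of u and v yields one triangle (u, v, w).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void list_triangles(const int *row_ptr, const int *col_idx,
                               const int *edge_u, const int *edge_v,
                               int num_edges, int3 *out, int *out_count,
                               int capacity) {
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= num_edges) return;
    int u = edge_u[e], v = edge_v[e];
    int i = row_ptr[u], i_end = row_ptr[u + 1];
    int j = row_ptr[v], j_end = row_ptr[v + 1];
    while (i < i_end && j < j_end) {            // merge-intersect sorted lists
        int a = col_idx[i], b = col_idx[j];
        if (a < b)      ++i;
        else if (a > b) ++j;
        else {                                  // common neighbor found
            if (a > v) {                        // w > v avoids duplicates
                int slot = atomicAdd(out_count, 1);
                if (slot < capacity)            // overflow is silently lost:
                    out[slot] = make_int3(u, v, a); // the problem INFINEL targets
            }
            ++i; ++j;
        }
    }
}

int main() {
    // Tiny example graph: a 4-clique, which contains exactly 4 triangles.
    int h_row[] = {0, 3, 6, 9, 12};
    int h_col[] = {1, 2, 3, 0, 2, 3, 0, 1, 3, 0, 1, 2};
    int h_eu[]  = {0, 0, 0, 1, 1, 2};           // edges with u < v
    int h_ev[]  = {1, 2, 3, 2, 3, 3};
    const int E = 6, CAP = 64;

    int *row, *col, *eu, *ev, *cnt; int3 *out;
    cudaMalloc(&row, sizeof h_row);  cudaMalloc(&col, sizeof h_col);
    cudaMalloc(&eu, sizeof h_eu);    cudaMalloc(&ev, sizeof h_ev);
    cudaMalloc(&out, CAP * sizeof(int3)); cudaMalloc(&cnt, sizeof(int));
    cudaMemcpy(row, h_row, sizeof h_row, cudaMemcpyHostToDevice);
    cudaMemcpy(col, h_col, sizeof h_col, cudaMemcpyHostToDevice);
    cudaMemcpy(eu, h_eu, sizeof h_eu, cudaMemcpyHostToDevice);
    cudaMemcpy(ev, h_ev, sizeof h_ev, cudaMemcpyHostToDevice);
    cudaMemset(cnt, 0, sizeof(int));

    list_triangles<<<1, 64>>>(row, col, eu, ev, E, out, cnt, CAP);

    int n; int3 tri[CAP];
    cudaMemcpy(&n, cnt, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(tri, out, n * sizeof(int3), cudaMemcpyDeviceToHost);
    printf("%d triangles\n", n);
    for (int k = 0; k < n; ++k)
        printf("(%d, %d, %d)\n", tri[k].x, tri[k].y, tri[k].z);
    return 0;
}
```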


The ‘INFINEL’ technology developed by Professor Kim’s team is a ‘miracle cure’ for this problem: with it, massively parallel computation, output, and storage can continue even when GPU memory is full. "Even with the small-memory GPU of a PC, it is possible to quickly perform demanding computations whose output data exceeds several terabytes," Professor Kim explained. He added, "We undertook this research to identify, and prepare in advance for, the problems that will arise as AI training continues to scale."
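
The article does not describe INFINEL's internals, and the toy below is not the team's design; it only illustrates the general pause/flush/resume pattern the quote points at, under invented assumptions. Each thread records how far it has progressed; when the device buffer fills, threads stop, the host drains the buffer to main memory, and the kernel is relaunched to continue from the saved state. All names, sizes, and the synthetic workload are made up, and the published system is more sophisticated than this simple relaunch loop.

```cuda
// Toy pause/flush/resume loop (NOT the actual INFINEL mechanism): when the
// device buffer fills, threads save their progress, the host drains the
// buffer to main memory, and the kernel is relaunched to continue.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void produce(const int *counts, int *progress, int num_threads,
                        long long *buf, int *used, int capacity, int *pending) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= num_threads) return;
    int k = progress[t];
    while (k < counts[t]) {
        int slot = atomicAdd(used, 1);
        if (slot >= capacity) {            // buffer full: pause this thread
            atomicAdd(pending, 1);         // tell the host more work remains
            break;
        }
        buf[slot] = (long long)t * 1000 + k;  // this thread's k-th result
        ++k;
    }
    progress[t] = k;                       // persist the resume point
}

int main() {
    const int T = 256, CAP = 1000;
    std::vector<int> h_counts(T, 37);      // data-dependent output sizes would
                                           // go here; fixed for the toy
    int *counts, *progress, *used, *pending; long long *buf;
    cudaMalloc(&counts, T * sizeof(int));
    cudaMalloc(&progress, T * sizeof(int));
    cudaMalloc(&buf, CAP * sizeof(long long));
    cudaMalloc(&used, sizeof(int));
    cudaMalloc(&pending, sizeof(int));
    cudaMemcpy(counts, h_counts.data(), T * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(progress, 0, T * sizeof(int));

    std::vector<long long> host_out;       // the "main memory" destination
    for (;;) {
        cudaMemset(used, 0, sizeof(int));
        cudaMemset(pending, 0, sizeof(int));
        produce<<<(T + 127) / 128, 128>>>(counts, progress, T,
                                          buf, used, CAP, pending);
        int n, more;
        cudaMemcpy(&n, used, sizeof(int), cudaMemcpyDeviceToHost);
        cudaMemcpy(&more, pending, sizeof(int), cudaMemcpyDeviceToHost);
        n = n < CAP ? n : CAP;             // atomicAdd may overshoot capacity
        size_t off = host_out.size();
        host_out.resize(off + n);
        cudaMemcpy(host_out.data() + off, buf, n * sizeof(long long),
                   cudaMemcpyDeviceToHost);
        if (!more) break;                  // every thread finished its work
    }
    printf("collected %zu items (expected %d)\n", host_out.size(), T * 37);
    return 0;
}
```

The property this toy preserves is the one the quote emphasizes: the device buffer size bounds only how often results are flushed to main memory, not how much output can ultimately be produced.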


Professor Kim’s team verified INFINEL’s performance across a range of experimental environments and datasets. It achieved roughly 55 times the computational performance of the best previous dynamic memory manager, and roughly 32 times that of the two-step technique, which runs the kernel twice, first to count the output and then to write it (sketched below).
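
The ‘two-step’ baseline mentioned here is a standard pattern; below is a simplified, hypothetical sketch of it. A first kernel only counts each thread's output, the host turns the counts into exact write offsets with a prefix sum, and a second kernel repeats the same work to actually write, so the data-dependent computation runs twice. The workload function is a stand-in invented for the example.

```cuda
// Hypothetical sketch of the conventional two-step scheme: kernel pass 1
// only counts, the host computes exact write offsets, and pass 2 redoes
// the same work to write the results, so the computation runs twice.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Stand-in for a data-dependent workload: thread t produces (t % 5) items.
__device__ int items_for(int t) { return t % 5; }

__global__ void pass1_count(int *counts, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < n) counts[t] = items_for(t);
}

__global__ void pass2_write(const int *offsets, int *out, int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;
    int base = offsets[t];
    for (int k = 0; k < items_for(t); ++k)   // same work, done a second time,
        out[base + k] = t;                   // now at an exact offset
}

int main() {
    const int N = 1024;
    int *d_counts, *d_offsets, *d_out;
    cudaMalloc(&d_counts, N * sizeof(int));
    pass1_count<<<(N + 255) / 256, 256>>>(d_counts, N);

    std::vector<int> h(N), off(N);
    cudaMemcpy(h.data(), d_counts, N * sizeof(int), cudaMemcpyDeviceToHost);
    int total = 0;                           // exclusive prefix sum on host
    for (int t = 0; t < N; ++t) { off[t] = total; total += h[t]; }

    cudaMalloc(&d_offsets, N * sizeof(int)); // the output size is now known
    cudaMalloc(&d_out, total * sizeof(int)); // exactly, so allocation fits
    cudaMemcpy(d_offsets, off.data(), N * sizeof(int), cudaMemcpyHostToDevice);
    pass2_write<<<(N + 255) / 256, 256>>>(d_offsets, d_out, N);
    cudaDeviceSynchronize();
    printf("wrote %d items in two passes\n", total);
    return 0;
}
```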


Professor Kim said, "In the era of generative AI and the metaverse (extended virtual worlds), technologies that can rapidly process the large-scale output of GPU computing will become increasingly important, and INFINEL can play a part in that."


The research team noted that because INFINEL performs consistently regardless of GPU memory size, it is also well suited to companies that must run analyses on limited hardware budgets.


The research was presented on March 4 at ‘PPoPP’ (the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming), a leading international conference on parallel computing, with PhD candidate Sungwoo Park, a student of Professor Kim, as first author; researcher Seyun Oh of Graphy, a graph deep-tech company founded by Professor Kim, as second author; and Professor Kim as corresponding author.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
