
[Q&A] Vice Minister Ryu Jemyung: "Additional Recruitment Not Intended to Favor Specific Companies"

LG AI, SK Telecom, and Upstage Elite Teams Advance to Second Stage
Objection Period Open for Ten Days; Additional Recruitment to Proceed Swiftly
"Visible Results Achieved in Short Time... Everyone Is a Winner"

The government has decided not to impose any restrictions on eligibility for the additional recruitment of elite teams for the Independent AI Foundation Model Project. As a result, Naver Cloud and NC AI elite teams, which were eliminated in the first evaluation, will be able to try again. Amid criticism that this move may favor certain companies, the government emphasized that it hopes as many companies as possible will use government resources to accelerate technology development.


Ryu Jemyung, Second Vice Minister of the Ministry of Science and ICT, announces the first evaluation results of the "Independent AI Foundation Model Project" at the Government Seoul Office on the 15th. Photo by Noh Kyungjo


Ryu Jemyung, Second Vice Minister of the Ministry of Science and ICT, stated during a Q&A session after announcing the first evaluation results of the project at the Government Seoul Office on the 15th, "The goal is not to determine a final winner, but to raise the competitiveness of Korean AI companies to a global level."


The Ministry of Science and ICT announced on this day that, out of a total of five elite teams, three (LG AI Research, SK Telecom, and Upstage) advanced to the second stage. The ministry also decided to select a fourth elite team through an additional recruitment process. Any domestic AI company, including the two teams eliminated this time, can participate without restriction.


Vice Minister Ryu explained, "The intent of this project was to design models ourselves, rather than simply using pre-trained models with existing weights. We hope that by maximizing the use of government resources, as many companies as possible will benefit and accelerate their technology development."


He added, "Domestic AI companies have already achieved visible results in a short period through this project. We are well aware of their intense efforts. Since everyone is a winner and a key player in AI, I ask for your generous encouragement."


The following is a Q&A with Vice Minister Ryu and other officials.


- What is the schedule for recruiting the additional team? Can companies that were previously eliminated apply?

▲We completed and announced the first evaluation as quickly as possible. An unexpected vacancy arose for the fourth spot, so we will conduct the additional recruitment by streamlining administrative procedures as much as possible. We want to give opportunities to companies that did not advance to the second stage, the ten consortia that did not participate in the first evaluation, and other capable companies.


- Amid controversy over "independence," Naver Cloud was eliminated. Please provide further technical details regarding encoder weights and related issues.

▲This is addressed in the technical report released by Naver. The encoder in question was evaluated from technical, policy, and ethical perspectives, and the application guidelines specified the basic requirements for an independent AI foundation model. It is true that Naver Cloud used open-source software without licensing issues. However, the weights should have been trained on the company's own data, and that process should have been demonstrated and verified. Ultimately, the team failed to meet the requirement that an independent AI foundation model be designed and trained from scratch. The evaluation committee likewise pointed out that Naver Cloud's technical approach did not fully meet the project's requirements.


- What is the roadmap going forward?

▲The three teams will be able to start the second stage immediately, while the additional team will go through the recruitment process. Companies can file objections regarding the results of the first evaluation. We provided guidance this morning and will accept objections for ten days before finalizing the results. We will try to minimize the gap between the three teams and the additional team. The total participation period and the total amount of GPU resources will be the same for all. The reason the three teams will not wait is that rented GPUs cannot be left idle.


- Is it permissible to use external encoders? How will the "second chance" process work?

▲Jung Haedong, IITP AI Program Manager: In this evaluation, because the encoder in question was not retrained with new weights, using external encoders and their weights as-is was deemed not to qualify as an independent foundation model.

▲Kim Kyungman, Director of AI Policy at the Ministry of Science and ICT: We are considering how to fill the vacancy that arose in the first evaluation and are also contemplating new projects. At this stage, our focus is on filling the vacancy.


- How will the additional elite team catch up with the others?

▲We will provide the same government-supplied GPU resources, data, and total project period. The project end date may differ by about a month, between June and July next year.


- Was there originally a minimum passing score? Are there any penalties?

▲The purpose of this compressed, small-scale competition is not to select just two final companies, but to create a fiercely competitive environment that drives significant results in a short period. Even companies that do not compete directly may be motivated to catch up. Please view this as a positive opportunity for a second challenge.


- Will there be clearer guidelines for evaluating independence in the additional recruitment or next evaluation?

▲Globally, even frontier companies all use open-source software. Everyone uses transformers, which are fundamental to AI, and open-source is standard practice among global big tech companies. It should not be seen as a negative. Using open-source strategically is a global norm in the AI ecosystem. However, the intent of this project was to design models ourselves, rather than simply using pre-trained models with existing weights, which would be free-riding on others' experience. At a minimum, we wanted to do this ourselves. There is consensus that competitiveness can be built even when using open-source. Since the goal was to install and efficiently use a large amount of GPU resources in a short period, it was necessary to adjust the project plan, and approval for such changes was granted once. We communicated with participants during development, and evaluations were conducted in consultation with the elite teams to align as much as possible.


- There are criticisms that suddenly introducing a "second chance" is unfair and that additional selection may be wasteful.

▲Currently, two consortia are working on foundation model projects in specialized fields. We want to maximize the use of limited GPU resources and budget by enabling as many AI companies as possible to participate in any way. Much has been gained from this process, and we expect more in the future. This is not a rushed or preferential approach for any particular company. The achievements generated by companies should not become the property of any single company. They must be contributed as open-source. The government's goal is to ensure that as many companies as possible benefit from government resources and accelerate technology development. If there is no fourth participating company, the remaining resources will be allocated to the three teams first.


- Naver Cloud's lack of independence was an issue. Was the same standard applied to other elite teams?

▲The evaluation committee reviewed the technical reports of the other companies and concluded that they met all the requirements, including those related to weights. The evaluation committee took all controversial circumstances into account.

▲Director Kim: Regarding the issue of training data weights for Naver Cloud, experts commented on it, but for the other four teams, there were no such issues. There was some mention of Upstage's reference, but it was not considered a critical flaw that would determine the outcome. SK Telecom also received some minor comments, but these were not absolute evaluation criteria.


- Did Naver Cloud inquire in advance about the encoder issue? If so, how specific was the government's response?

▲There were no such inquiries. The application guidelines contained relevant examples, and information sessions were held for participants. After the controversy, Naver submitted a statement of explanation, but since the evaluation was ongoing, it was not considered. Naver explained that they have their own encoder and that the encoders in question accounted for only a small portion of the overall project.


- What are the evaluation criteria for the second stage?

▲Director Kim: There will be three criteria: benchmarking, expert evaluation, and real-user evaluation. Benchmarking assesses objective performance, expert evaluation looks at technical originality and the ability to prepare for the future, and user evaluation considers how useful the AI is in actual field applications. There will be no major changes to the overall framework. For the "from scratch" aspect, we will gather opinions and specify differentiated scoring.


- Compared to the global level, how far have we come?

▲We have stated, in broad terms, that our goal is to reach 95% of the global standard. In reality, compared to top-tier and frontier-level AI, we are still behind. By tracking a moving target, we will continue to catch up, comparing ourselves against the best-performing AI at each stage.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
