Expansion of Teacher Question Writers and Verification of Their Expertise
Establishment of a New "Education Assessment and Test Development Support Center" and Introduction of an AI-Based Question Development Support System
The Ministry of Education has concluded that its failure to control the difficulty of the English section of the 2026 College Scholastic Ability Test (CSAT) stemmed from "an excessively large number of questions being replaced during test development compared with other sections, which caused a chain of disruptions in subsequent procedures such as difficulty checks."
On February 11, the Ministry of Education released the results of a detailed investigation into the entire process of CSAT question writing and review conducted from December 10 to 23 last year. The investigation found that in the English section, 19 out of a total of 45 questions were replaced during the development process. This is a striking figure compared with the replacement of 1 question in Korean and 4 questions in mathematics. According to the ministry, in the process of excluding questions that had appeared in the private education sector, an excessively large number of questions ended up being replaced, which made the development schedule very tight and caused disruptions in procedures such as difficulty checks. It also concluded that the opinions of review committee members were not sufficiently reflected in this process.
The ministry also found problems in the selection of English question writers. While teachers account for 45% of question writers across all CSAT sections, the figure is only 33% in English, which, the ministry said, limited how well the questions could reflect test-takers' actual academic level.
The Ministry of Education stated, "In particular, since English is graded on an absolute scale, setting an appropriate difficulty level is extremely important, and we therefore plan to increase the proportion of teacher question writers to around 50% going forward."
The ministry also concluded that verification of competence and expertise was inadequate in the selection process for CSAT question writers and reviewers. Starting with the 2025 CSAT, to ensure fairness, question writers and reviewers have been appointed by random draw from the integrated CSAT personnel pool. In that process, however, the ministry explained, verification of their expertise was insufficient. Going forward, while keeping the random selection method, the ministry will verify expertise in depth by closely examining candidates' track record in writing questions for the CSAT, mock CSATs, and academic achievement assessments, as well as their experience authoring textbooks and EBS teaching materials.
To improve the procedures for checking CSAT difficulty levels, the ministry will also newly establish a "Section-Specific Question Review Committee." The plan is to integrate the existing processes for checking question errors and difficulty by section and by item into this committee so they can be managed more efficiently. In addition, the role of the "CSAT Question Review Committee," which mainly checks whether questions go beyond the official curriculum, will be expanded to incorporate far more input from frontline teachers and to add the function of checking difficulty levels.
In the mid to long term, the ministry will establish an "Education Assessment and Test Development Support Center" to create a stable environment for CSAT question development. Until now, question writers and reviewers have stayed together for about 40 days each year in privately rented accommodation facilities while writing the test. A ministry official said, "Because we have been renting private facilities every year, it has been difficult to create a stable environment for test development," adding, "In particular, due to security issues, there have been constraints on building an efficient support system, such as using AI." The ministry is pushing to establish the center by 2030 and plans to apply for a preliminary feasibility study within the second quarter of this year.
Going forward, the ministry also plans to draw up measures for using artificial intelligence (AI) in developing English questions. It intends to build an AI-based English passage generation system and use it as the basis for writing questions, shortening the time required for test development. AI is also expected to be used for tasks such as predicting question difficulty and screening for similar items.
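The ministry has not disclosed any technical details of the planned system. Purely as an illustration of the kind of "similar item" screening it mentions, the sketch below uses TF-IDF cosine similarity to flag passages in an item bank that resemble a draft passage; the item bank, the threshold, and the function name are all hypothetical and are not part of the ministry's plan.

```python
# Illustrative sketch only: one generic way to flag draft passages that
# resemble items already in a bank, using TF-IDF cosine similarity.
# The bank contents and the similarity threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_similar_items(draft_passage, item_bank, threshold=0.8):
    """Return indices of item-bank passages whose TF-IDF cosine similarity
    to the draft passage meets or exceeds the threshold."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([draft_passage] + list(item_bank))
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return [i for i, score in enumerate(scores) if score >= threshold]

if __name__ == "__main__":
    bank = [
        "A reading passage about climate change and government policy.",
        "A short dialogue about ordering food at a restaurant.",
    ]
    draft = "A reading passage discussing climate change policy and its effects."
    # With a deliberately low threshold for this tiny example, item 0 is flagged.
    print(flag_similar_items(draft, bank, threshold=0.3))
```

In practice such a check would more likely rely on semantic embeddings than on raw word overlap, but the basic shape, comparing a draft against a bank and flagging matches above a threshold, would be similar.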


