[Asia Economy Reporter Kim Bong-su] Artificial intelligence (AI) is being actively used in vaccine and treatment development amid the COVID-19 pandemic. But if biases creep in or dangerous assumptions go unchecked, the outcomes can be fatal. In response, Korean researchers have presented, through international joint research, the world's first guidelines for making the use of AI in healthcare more trustworthy.
The Korea Advanced Institute of Science and Technology (KAIST) announced on the 15th that it developed the "Using Artificial Intelligence to Support Healthcare Decisions: A Guide for Society" through international collaborative research.
The global COVID-19 pandemic has accelerated the commercialization of AI technology. For example, the UK AI startup BenevolentAI used AI to compress the roughly eight years typically required to identify a treatment for a new disease into just one week.
As AI technology spreads across the economy, industry, society, and culture, it creates tremendous added value and convenience in daily life. But concerns have also arisen that rapid adoption brings pitfalls such as data bias and misuse. In healthcare, the quality and verification of the data underpinning AI are directly linked to human lives, so the validity and safety of the technology must be prioritized above all.
This guide was created by KAIST's 4th Industrial Revolution Policy Center (KPC4IR) out of the conviction that more people need to ask questions about the accountability of AI technology if it is to be trusted in healthcare. AI must not exacerbate existing inequalities through data bias, and it must secure data accuracy to minimize errors in its results.
KPC4IR conducted international joint research over the past year with the Risk and Public Understanding of Science Research Group at the National University of Singapore and Sense About Science, a leading UK nonprofit science and technology organization.
The researchers compiled domestic and international cases of AI applied in healthcare, such as improving the effectiveness of medical image analysis and diagnosis, disease prediction and clinical decision-making using big data, and shortening drug development timelines. They emphasized that AI may exhibit bias when information is missing from or excluded from its training data, and that using data for purposes other than those for which it was originally collected can lead to misjudged relationships between variables or even distorted results.
For example, researchers in Germany developed an AI to detect skin lesions and estimate the likelihood of cancer, then ran an experiment comparing its results with those of practicing doctors. When the same lesion images were shown to the AI and to 58 dermatologists of various nationalities, the AI identified suspicious lesions with 87% accuracy, surpassing the doctors' 79%. This demonstrated that AI can assist doctors in the decision-making process when treating patients.
However, if AI is primarily trained on data collected from people with lighter skin tones, it is more likely to fail to properly diagnose lesions in patients with darker skin tones. AI is called "intelligent" because it does not merely search data but analyzes hidden patterns to extract meaningful information.
People tend to believe AI decision-making is cold and objective. However, since AI learns based on existing data, social biases, prejudices, and dangerous assumptions can lead to unexpected results.
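How easily this happens can be seen in a small sketch. The following Python example is purely illustrative and is not taken from the KAIST guide; its dataset, features, and group labels are all synthetic. It trains a standard classifier on data in which a minority subgroup is heavily under-represented and follows a different feature-label pattern, loosely mimicking how lesions can present differently across skin tones, and then measures accuracy separately for each group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flipped):
    # Two synthetic features; the label rule is inverted for the minority
    # group, standing in for conditions that present differently.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y) if flipped else y

# Training set: 95% majority group, only 5% minority group.
X_maj, y_maj = make_group(950, flipped=False)
X_min, y_min = make_group(50, flipped=True)
model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min]))

# Balanced held-out evaluation: accuracy is high for the majority group
# but near zero for the minority group, because the model learned only
# the majority pattern.
for name, flipped in [("majority", False), ("minority", True)]:
    X_test, y_test = make_group(500, flipped=flipped)
    print(f"{name} group accuracy: {model.score(X_test, y_test):.2f}")

Checking accuracy per subgroup in this way, rather than as a single aggregate figure, is one practical means of surfacing such bias before deployment, and it reflects the spirit of the guide's criteria described below.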
Focusing on reliability in healthcare, the researchers included five criteria in the guide for assessing fairness issues around data quality and variables and for checking the technology's accuracy. When conducting AI research and development in healthcare, where human lives are at stake, it is necessary to verify ▲use of accurately sourced data ▲collection or selection of data appropriate for the intended purpose ▲precise statement of limitations and assumptions ▲disclosure of data bias ▲and appropriate testing in real environments.
Kim So-young, director of KPC4IR, said, "If questions verifying the robustness of AI technology in healthcare are actively discussed in our society, it will ultimately raise the capabilities of AI technology while establishing trustworthy standards." She added, "We expect this guide to play an important role in raising public understanding of AI technology and recognizing its limitations and areas for improvement."
This KPC4IR research is the world's first case in which researchers spanning Europe and Asia have jointly presented AI technology guidelines in the specific field of healthcare. Experts from the National University of Singapore, technology company Affinidi, Carlos III University of Madrid in Spain, and the Lloyd's Register Foundation and Guy's and St Thomas' NHS Foundation Trust in the UK participated. Domestically, many industry, academia, and research stakeholders collaborated, including Asan Medical Center, Seoul National University Bundang Hospital, the KAIST AI Graduate School and Department of Bio and Brain Engineering, the Science and Technology Policy Institute, the Korea Information Society Development Institute, and AI solution company VUNO.
KPC4IR presented this research at the "2021 KDD International Workshop," held online from 10 a.m. on the 15th. Detailed information is available on the websites of the KAIST 4th Industrial Revolution Policy Center and the National University of Singapore's Risk and Public Understanding of Science Research Group.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.