[The Editors' Verdict] Algorithms and Regulation Standing Over the News

Jae-Won Kang, Dean of the Graduate School of Communication and Information and the Graduate School of International Information Security at Dongguk University


SNS (social media) services connect people and, through algorithms, deliver personalized news. Users who are unaware that the news shown to them is algorithmically recommended can slide into confirmation bias on their own. One study found that although Facebook users frequently use the News Feed as a source of political news, most do not know that the feed is curated by an algorithm. These users treat "seeing it in the news on SNS" as a kind of scientific verification and believe their views rest on solid evidence (the recommended news), when in reality the algorithm has simply served up news tailored to them. They also come to believe that public opinion matches their own views, and even that the majority agrees with them.


If algorithm-based news recommendation sometimes triggers users' confirmation bias, it can distort the deliberative process in which diverse opinions are exchanged, and even affect elections. The concern is greatest for younger generations, who get most of their political news through SNS such as Facebook. Is telling them that the news is algorithmically recommended the only way to reduce these harms?




Now the government must consider more proactive regulation of algorithms themselves. An algorithm is a set of rules and principles for processing information: its designer decides how input information is processed, against which criteria, to produce an output. The degree of human influence on a design varies, but humans leave their mark whenever they choose the criteria, the optimization function, or the training data. The biases of the people who assembled the training data, for example, can be baked into the basic inputs. This is why transparency, fairness, and accountability should be demanded of designers and operators, to check human influence at the design stage.
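
The designer's influence described above can be made concrete with a minimal sketch. Everything here is hypothetical (the articles, the scoring rule, the weights): the point is only that the human-chosen weights are the "criteria" the paragraph refers to, and changing them changes which news each user sees.

```python
# Hypothetical toy recommender: the designer-chosen weights below are the
# human criteria embedded in the algorithm.

def recommend(articles, user_clicks, weight_similarity=0.8, weight_recency=0.2):
    """Rank articles by a designer-chosen mix of criteria."""
    def score(article):
        # similarity: fraction of the article's topics the user clicked before
        overlap = len(set(article["topics"]) & set(user_clicks))
        similarity = overlap / max(len(article["topics"]), 1)
        return weight_similarity * similarity + weight_recency * article["recency"]
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "Election poll update", "topics": ["politics"], "recency": 0.9},
    {"title": "New stadium opens",    "topics": ["sports"],   "recency": 1.0},
]

# A user who only ever clicks politics gets politics ranked first --
# the feedback loop behind confirmation bias.
ranked = recommend(articles, user_clicks=["politics"])
```

With the default weights the politics reader is shown more politics; if the designer instead weights recency alone, the ranking flips. The output is a consequence of human choices made before any user arrives.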


What about after the design stage? What happens between the input and the output? While processing training data, the machine learns; this is machine learning. The learned results are then applied to new judgments, the process commonly called deep learning. At that point the machine is judging by a revised algorithm of its own, not the one originally designed. In other words, through algorithms machines, like humans, can come to have intelligence and a kind of autonomous judgment.
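
The shift from the designed rule to a learned one can be illustrated with a deliberately tiny sketch (the numbers and the midpoint-fitting rule are hypothetical, and far simpler than real deep learning): after training, the same input can receive a different verdict than the original design would have given.

```python
# Hypothetical one-dimensional classifier: a human writes an initial
# threshold, training data then produces a revised one.

def learn_threshold(examples):
    """Fit a decision threshold as the midpoint between class means."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

designed_threshold = 0.5                              # the rule the human wrote
training_data = [(0.9, 1), (0.8, 1), (0.3, 0), (0.2, 0)]
learned_threshold = learn_threshold(training_data)    # 0.55: data moved the rule

def judge(x, threshold):
    return 1 if x >= threshold else 0

# An input of 0.52 passes the designed rule but fails the learned one:
# the machine now judges by a rule its designer never wrote down.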


If such intelligent algorithms cause harm, who should be held responsible? Regulation is a deliberate attempt to restrict certain private actions in pursuit of public goals and to hold those who do not comply accountable. But what if the subject to be held accountable is not a person but a machine, or an algorithm?




Fact-checking is the process of verifying factual claims against evidence: sifting a mass of news to pick out the fakes. If one trusts the good intentions of most news producers, it is the task of finding a few deliberate fakes among many truths. If an algorithm could be designed to distinguish truth from falsehood by scientific verification, fact-checking could be mandated for SNS news-recommendation algorithms. But errors in which fake news is certified as fact-checked truth are serious, with worse consequences than errors that mislabel true news as fake. If users accept laundered, even certified, fake news as fact, the harm of fake news drives out the good of true news. And if that fake news carries a biased slant, confirmation bias makes users all the more likely to believe it. The errors that regulation itself can produce must also be weighed when regulating algorithms.
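
The asymmetry between the two error types can be tallied explicitly. This is a minimal sketch with made-up labels, not a real fact-checking system: it only separates the error the author calls more serious (fake news certified as true) from the other direction (true news flagged as fake).

```python
# Hypothetical verdict log: (actual label, fact-checker's verdict) pairs.

def error_report(verdicts):
    """Count the two error types a fact-checking algorithm can make."""
    certified_fakes = sum(1 for actual, pred in verdicts
                          if actual == "fake" and pred == "true")
    flagged_truths = sum(1 for actual, pred in verdicts
                         if actual == "true" and pred == "fake")
    return {"fake_certified_as_true": certified_fakes,
            "true_flagged_as_fake": flagged_truths}

verdicts = [("true", "true"), ("true", "fake"), ("fake", "true"),
            ("fake", "fake"), ("true", "true")]
report = error_report(verdicts)
```

A regulator mandating such a system would, on the author's argument, need to weight `fake_certified_as_true` far more heavily than `true_flagged_as_fake` when judging whether the algorithm is safe to deploy.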




© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
