
[THE VIEW] Who Makes the Decisions in the Age of AI?

The Question of Responsibility That Vanishes in the Face of Automation

The first thing to disappear from organizations after the introduction of artificial intelligence (AI) was not repetitive work. What vanished was the question, "Who made this decision?" Increasingly, the phrase "AI said so" is heard in meeting rooms. The sentence may sound like the language of efficiency, but in reality it signals the blurring of the boundary between responsibility and authority.


This divergence is becoming increasingly clear in American workplaces. Even when using the same generative AI, the same recommendation systems, and the same automation tools, some organizations achieve results while others experience only confusion and distrust. Interestingly, the difference does not stem from the level of technology or the amount of data. The core issue is where AI is positioned within the organization: in other words, a matter of governance.


Organizations that achieve results clearly define AI as an 'auxiliary tool.' AI analyzes and suggests, but it does not decide. At the final stage of decision-making, there is always a human, and the attribution of responsibility is clear. In such organizations, "AI said so" is never the final justification. Instead, that phrase becomes the starting point for inquiry. They ask why such a recommendation was made, what assumptions underlie it, and whether those assumptions are still valid in the current situation.


In organizations that defer to AI, by contrast, the boundaries blur. Introduced at first for the sake of efficiency, AI is at some point elevated into the basis for judgment itself. Managers justify decisions by presenting AI outputs as 'objective judgments,' making it difficult for employees to challenge those results. AI appears to be a neutral tool, but in reality it becomes a mechanism for redistributing organizational power.


In this process, one crucial set of questions often goes unasked: who configures the AI, who interprets its results, and who bears responsibility? Depending on who adjusts the AI model's parameters, who defines the scope of the data, and who summarizes the results in reports, the same AI can create entirely different organizational cultures. If AI is monopolized by managers, control is strengthened; if it is open to frontline employees, experimentation and learning increase. The technology may be the same, but outcomes can diverge dramatically depending on organizational design.


Another recent trend in the United States is the increase in AI verification work. As AI writes reports, summarizes contracts, and automates customer service, the tasks of checking and revising these outputs do not disappear; in fact, they increase. The problem is that responsibility for this verification is often not clearly defined. While companies claim to have reduced costs through AI, in reality, the invisible labor of verification and correction is shifted onto individuals. This is why there is a disconnect between productivity statistics and what people actually experience.

The sentence "AI said so" is wrapped in the language of efficiency, blurring the boundaries of responsibility and authority in human decision-making in real time. Google Gemini generated image.

For this reason, some American companies are redefining what constitutes successful AI utilization. Now, "How actively did you use AI?" is no longer a good evaluation criterion. Instead, "How much did you trust AI, and where did you draw the line?" has become more important. Those who recognize and supplement AI's limitations, rather than simply following its suggestions, are rated more highly. The ability to use AI is now being redefined as judgment and a sense of responsibility.


This issue is also coming to the fore in the regulatory environment. The U.S. Federal Trade Commission and the Consumer Financial Protection Bureau are beginning to view automated decision-making with unclear responsibility structures as a risk factor, rather than simply questioning the accuracy of algorithms. The reasoning is that a system where it is unclear who is responsible for the outcome is more dangerous than a system where the outcome itself cannot be explained.


These questions are by no means irrelevant for Korean companies and organizations. In a culture that values rapid decision-making and efficiency, AI is attractive as a tool for skipping intermediate steps. But this also increases the risk that AI's judgments will readily become the highest standard. Especially in areas such as evaluation, personnel, and performance management, the authority of AI can quickly become institutionalized.


Ultimately, organizational competitiveness in the AI era is determined not by the speed of technology adoption, but by the ability to set boundaries. Only organizations that distinguish what AI can and cannot do, design which areas to automate and which to leave to human judgment, and, above all, clearly define where responsibility lies in the event of failure, will truly 'use' AI.


AI does not govern organizations on its own. It merely reveals, quietly, which organizations are willing to delegate judgment and evade responsibility. As more organizations adopt AI, true differentiation will begin not with technology, but with the questions they ask.


Son Yoonseok, Professor at the University of Notre Dame


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
