[AI Era, Changing Jobs] What Companies Fear Most Is 'AI Errors'

AI Error and Distortion Risks Are Top Concerns
Final Approver Seen as Key to Accountability
Organizational Responsibility Emphasized Over System
Ethical Standards Focus on Control Rather Than Technology
AI as a Tool, Human Judgment Remains Essential

As the adoption of artificial intelligence (AI) spreads, companies have identified 'AI errors' and the accountability questions they raise, rather than technological competitiveness, as their most significant risk factors. Even as AI improves work efficiency and productivity, organizations are most wary of the damage that AI-generated errors or distortions could inflict on them.


According to the results of the "Survey on Organizational Changes After AI Adoption," conducted by The Asia Business Daily from December 8 to 19 last year among managers in the personnel, strategy, organization, and planning departments of Korea's top 100 companies, 26 of the 74 responding companies (35.1%) selected 'standards for preventing errors and distortions in AI outcomes' as the most important ethical standard for AI use, making it the most common answer.


It was followed by 'standards for determining accountability when using AI,' cited by 21 companies (28.4%), and 'standards for recording and tracking the AI usage process,' emphasized by 15 companies (20.3%). Only 12 companies (16.2%) ranked 'customer protection standards' first. The results suggest that where AI is used at customer touchpoints or in decision-making processes, companies worry less about the technology itself than about the possibility of errors and the difficulty of containing their impact.


Concerns about AI errors are also evident in how companies determine accountability. When asked who should be held responsible if work-related risks arise due to AI, 25 companies (33.8%) answered that the 'final approver of the relevant work' should be held accountable. Another 19 companies (25.7%) said that 'executives or the organization as a whole' should bear responsibility.


In contrast, only 16 companies (21.6%) viewed the 'department operating and managing the AI system' as the accountable party, and just 14 companies (18.9%) pointed to the 'person in charge of the actual work.' This shows a prevailing perception that, even if AI performs certain tasks, responsibility for the outcomes should rest with the final decision-maker and the organization, rather than with the system or individual employees.


These responses indicate that companies are utilizing AI more as a tool to assist human judgment, rather than accepting it as an independent decision-making entity. Underlying this is the belief that, unless a structure is maintained in which humans review and approve AI-generated results rather than simply following them, the risks the organization must bear could increase significantly.
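As one concrete illustration of such a review-and-approve structure, the sketch below gates an AI output behind an explicit, logged sign-off by a named approver, which also produces the kind of usage trail and final-approver accountability the respondents describe. It is a minimal hypothetical example, not a system described by any surveyed company; every name in it (AIResult, ApprovalRecord, release, audit_log, the task and approver identifiers) is invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIResult:
    task_id: str
    model: str
    output: str

@dataclass
class ApprovalRecord:
    task_id: str
    approver: str   # a named person, not "the system"
    approved: bool
    note: str
    timestamp: str

# Append-only trail recording every decision made about an AI output.
audit_log: list[ApprovalRecord] = []

def release(result: AIResult, approver: str, approved: bool, note: str = "") -> str | None:
    """Hold an AI output until a named human approves it.

    The output is returned only on approval; either way, the decision is
    logged so responsibility can later be traced to the final approver.
    """
    record = ApprovalRecord(
        task_id=result.task_id,
        approver=approver,
        approved=approved,
        note=note,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(record)
    return result.output if approved else None

# Usage: an AI-drafted customer reply is held until a manager signs off.
draft = AIResult(task_id="cs-1042", model="assistant-v1", output="Dear customer, ...")
approved_text = release(draft, approver="kim.manager", approved=True, note="figures checked")
```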


Companies participating in the survey reported that as AI adoption spreads, managing errors and establishing accountability structures are emerging as more pressing challenges than technological performance. As the scope of AI use expands, even a single minor error can derail decision-making, erode customer trust, or trigger legal disputes.



© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
