85% of AI-Driven Projects Fail: Lessons from the "AI Mistake Note"

"Everyone Wants 'AI Innovation'... But the Result Is Failure"
The Top Reason: Leadership That Issues Orders Without Understanding
Data Bias and Poor IT Infrastructure Are Also Major Causes

Editor's Note: Examining failures is the shortcut to success. "AI Mistake Note" explores cases of failure involving AI-related products, services, companies, and individuals.

If you are planning to start something new with artificial intelligence (AI), you should set "failure" as your default expectation, and not because of any lack of development time, effort, or capital. The Japanese government, which spent four years developing an AI for detecting child abuse, recently decided to halt the project: it simply made too many judgment errors.


According to market research firm Gartner, 85% of AI projects have failed. This is more than twice the failure rate of general IT projects. The successful AI services, products, and solutions we often see are, in fact, born out of the graveyard of failures.


So why do AI projects fail so often?


Leadership That Issues Orders Without Understanding
A meme in which someone says, "Try to sell me this pen," and the reply is, "This works with AI." It satirizes how the term "AI" is overused as a marketing tool. The image parodies a scene from the movie "The Wolf of Wall Street" and has gained significant popularity in English-speaking communities such as Reddit. Screenshot from Reddit

According to a study by the RAND Corporation, a U.S. think tank, based on interviews with 65 AI experts, the most common reason for AI project failure was "leadership."


Company executives often fail to accurately identify or communicate the "real problem" that needs to be solved, leading technical and operational teams to optimize the wrong metrics or focus on low-value areas.


At one e-commerce company operating in crisis-management mode, executives identified AI as the answer to business innovation. The technical team delivered "a dramatic increase in sales through AI." At first glance, this looked like a success.


However, because the model focused solely on sales volume, it neglected key indicators such as operating profit per product, inventory management, and average order value. The executives eventually realized that what they actually needed was "an AI model that delivers the highest profit," not simply higher sales volume. But this realization came only after six months of time, money, and manpower had already been spent.
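To make the gap concrete, here is a minimal sketch (in Python, with a hypothetical product catalog and made-up figures, not data from the article) of why the same ranking task gives different answers depending on whether the objective handed to the technical team is revenue or operating profit:

```python
# Hypothetical catalog: (name, unit_price, unit_cost, predicted_units_sold)
products = [
    ("A", 10.0, 9.5, 1500),   # high volume, razor-thin margin
    ("B", 40.0, 25.0, 120),   # low volume, healthy margin
    ("C", 15.0, 14.0, 800),
]

def revenue(p):
    _, price, _, units = p
    return price * units           # what the sales-volume model optimized

def profit(p):
    _, price, cost, units = p
    return (price - cost) * units  # what the business actually needed

print([p[0] for p in sorted(products, key=revenue, reverse=True)])  # ['A', 'C', 'B']
print([p[0] for p in sorted(products, key=profit, reverse=True)])   # ['B', 'C', 'A']
```

The numbers are invented; the point is that the objective written into the project brief determines which products the model ends up pushing.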


The RAND Corporation emphasized, "The most common reason for AI project failure is misunderstanding and lack of communication about the project's intent and purpose," adding, "Leaders must ensure that technical, development, and operational staff clearly understand the project's objectives and context."


When starting a project, it is essential to clarify what the project is for. These projects did not fail because they lacked a one-of-a-kind idea or an astronomical investment.


Failure Due to Data Bias and Quality Issues
Image depicting data distribution. Photo by Getty Images Bank

Child welfare workers in Japan are chronically overworked. The Japanese Children and Families Agency sought to help them using AI. By inputting various data such as the child's physical condition and the guardian's behavior, the AI was supposed to determine the likelihood of abuse. However, things did not go as planned.


Instead of reducing the heavy workload of child welfare workers, it became an additional source of stress, because it made too many incorrect judgments. In situations where physical abuse was obvious to anyone, the AI would sometimes rate the likelihood of abuse as low, while in other cases it wrongly flagged innocent parents as abusers.


This was due to a lack of sufficient and representative training data, as well as data bias. In particular, psychological abuse, which does not leave visible injuries, was difficult for the AI to detect.


It is crucial to secure sufficiently representative and diverse data, and to conduct data cleansing and augmentation in advance if necessary. If collecting data is challenging, it is important to either narrow the project scope or conduct preliminary work such as pilot data collection to gather the necessary data.
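As an illustration, a pre-training data audit can be as simple as the sketch below (Python with pandas; the DataFrame and column names are hypothetical, chosen only to mirror this case). It checks label balance, coverage of hard subgroups such as psychological abuse, and missing values before any model is trained:

```python
import pandas as pd

def audit(cases: pd.DataFrame) -> None:
    # 1. Label balance: a heavily skewed label often yields a model that
    #    defaults to the majority answer ("low likelihood of abuse").
    print("Label distribution:\n", cases["abuse_confirmed"].value_counts(normalize=True))

    # 2. Coverage of hard subgroups: psychological abuse leaves no visible
    #    injuries, so too few examples means the model will barely learn it.
    print("Cases per abuse type:\n", cases["abuse_type"].value_counts())

    # 3. Missing values: decide whether to clean, impute, or collect more data
    #    before training rather than after deployment.
    print("Missing values per column:\n", cases.isna().sum())
```

If such an audit shows a category that barely appears, that is a signal to narrow the scope or collect more data first, not to train anyway.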


'Isolated Island'... AI Left Alone
AI must be organically connected to numerous networks to function properly. Photo by Getty Images Bank

Just as important as the data itself is the infrastructure that surrounds it. Even if you discover an oil field, it is useless if you lack the infrastructure to extract and transport the oil.


There are cases where excellent AI services or models are developed, but the technology never leads to real innovation. According to Gartner, only about 54% of AI models initially developed are actually deployed in the field. In other words, nearly half of these models never leave the lab, despite the effort invested in their development.


One manufacturer developed an AI model to predict factory equipment failures, and the model detected impending faults at a high rate. However, it was never deployed on site: predicting failures required real-time data from machines distributed throughout the factory, but the Internet of Things (IoT) infrastructure needed to collect that data had never been considered.


When a painstakingly developed model is not applied in the field and is left unused, it is a waste of both time and resources. On-site teams lose confidence in AI, while executives, unable to see results proportional to their investment, begin to distrust AI itself. This leads to fewer opportunities for new AI projects and budget allocations in the future, resulting in long-term losses.


Just as a car requires not only an engine but also wheels and a chassis to run, AI projects must consider deployment and operation from the initial planning stage. Budgets and schedules should account for infrastructure building and system integration, and organizations should ensure collaboration between development and IT operations teams.


Additionally, even after launching an AI model, it is necessary to establish a system for monitoring and improving its performance. Only in this way can AI models move beyond laboratory experiments and become embedded in real-world business processes.
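What such a monitoring loop might look like in practice is sketched below (Python; the baseline accuracy, window size, and alert threshold are illustrative assumptions, not figures from the article):

```python
from collections import deque

class PerformanceMonitor:
    """Compare live accuracy over a recent window against the launch baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500, max_drop: float = 0.05):
        self.baseline = baseline_accuracy
        self.max_drop = max_drop
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = wrong

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_attention(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.max_drop

# Usage sketch: record every labeled prediction, and trigger retraining or
# a human review whenever needs_attention() returns True.
# monitor = PerformanceMonitor(baseline_accuracy=0.92)
# monitor.record(prediction, actual)
```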


"Just to Add a Line to My Resume"... Preference for the Latest Technology
The Tesla factory in Austin, Texas, USA. Photo by AFP

In 2017, Tesla set out to maximize automation in the production process for its Model 3. However, coordination among the automation robots did not go as planned, and frequent system errors significantly slowed production.


As a result, CEO Elon Musk admitted, "We over-relied on automation. We underestimated human abilities," and reverted some processes to human labor.


When adopting the latest technology itself becomes the goal, the real problems that need to be solved are often overlooked. This is especially common in technical roles. Rather than choosing the right tools to achieve objectives, some prioritize using the latest technology to enhance their resumes.


While experimenting with new technology is not always inefficient, increasing complexity without regard to the project's purpose raises the risk of failure. A practical approach is to verify the effectiveness of AI adoption through small-scale pilot tests and expand gradually.
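One way to keep such a pilot honest is to gate each expansion step on a measured result, along the lines of the sketch below (Python; the stage sizes, target metric, and measurement callback are all hypothetical):

```python
ROLLOUT_STAGES = [0.05, 0.25, 1.00]   # share of traffic or production lines per stage
TARGET_METRIC = 0.80                  # minimum acceptable success metric, e.g. precision

def expand_rollout(measure_pilot_metric) -> float:
    """Expand coverage stage by stage; stop as soon as a stage misses its target."""
    coverage = 0.0
    for stage in ROLLOUT_STAGES:
        metric = measure_pilot_metric(stage)   # run the pilot at this coverage level
        if metric < TARGET_METRIC:
            print(f"Stopping at {coverage:.0%}: metric {metric:.2f} is below target")
            return coverage
        coverage = stage
        print(f"Stage {stage:.0%} passed with metric {metric:.2f}")
    return coverage
```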


Considering the Limitations of Both Technology and Systems

AI is not a panacea that can solve every problem. While it excels at prediction, classification, and pattern recognition, it has weaknesses in subjective interpretation and handling rare cases.


Blindly applying AI while ignoring technical and situational limitations can lead to results that fall far short of expectations. It is important to clearly recognize the technical limitations of AI and focus on problems that can actually be solved.


For example, in sensitive fields such as biotechnology, AI models must be validated even more thoroughly, and collaboration with experts should be strengthened. Furthermore, before fully introducing AI, it is necessary to conduct pilot projects to verify effectiveness and then expand gradually.


In summary, the high failure rate of AI projects is not simply due to technical problems. It is the result of a combination of factors: incorrect problem definition, low-quality data, a simple preference for the latest technology, lack of comprehensive infrastructure, and failure to consider the limitations of AI technology.


For a successful AI project, a comprehensive approach is essential: clearly defining business objectives, managing data quality, and considering the operational environment. Above all, it is important to focus not on the technology itself, but on solving the actual problem.


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

