AI NPCs Raise Concerns Over Inducing Specific Player Behaviors
Gender and Racial Biases Embedded During Training Also Problematic
Domestic Game Companies Busy Developing Countermeasures
The combination of game characters and artificial intelligence (AI) has played a significant role in enhancing immersion and enjoyment, but behind this lie serious ethical issues such as emotional manipulation, bias, and privacy invasion.
One particularly notable recent issue is the ‘emotional manipulation problem’ involving AI NPCs (non-player characters) that act as in-game assistants. NPCs help players progress through the game, and AI integration has made them conversational. The concern is that, in analyzing players’ emotions in real time and tailoring responses accordingly, AI NPCs could steer players’ behavior.
Manipulated behavior here means that AI NPCs could leverage the trust built through interactions with players to encourage item purchases or push players toward specific choices.
A frequently cited example of NPC-driven behavior is the open-world role-playing game (RPG) Fallout 4, developed by the American game company Bethesda Game Studios. The character ‘Preston Garvey’ is one of the key NPCs in Fallout 4. He assigns players missions to defend settlements or liberate new ones, and each time a quest is completed he raises a new issue, prompting players to keep going. The quests Garvey hands out never run out, locking players into repetitive actions. Although this appears to be free choice within the game, it is cited as a case designed with a structure in which the AI, not the player, ultimately controls behavior.
In another open-world RPG, GTA V, AI-controlled police detect players’ criminal activities. The AI police analyze player behavior in real time and can induce specific actions. For example, when chasing a player, they often use encirclement tactics to cut off attempts to hide or escape: the AI predicts the player’s movements, blocks escape routes, and calls in other patrol cars to corner the player.
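The pursuit behavior described above can be sketched in a few lines. This is a purely illustrative toy, not actual game code: the function names, parameters, and the simple linear extrapolation are all assumptions standing in for whatever prediction a real pursuit AI uses.

```python
# Illustrative sketch of predictive pursuit: instead of heading toward the
# player's current position, the pursuer extrapolates the player's velocity
# and steers toward the predicted intercept point. All names are hypothetical.

def predict_position(pos, vel, t):
    """Linearly extrapolate where the player will be after t seconds."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def pursue(police_pos, player_pos, player_vel, lookahead=2.0, speed=1.0):
    """Return the pursuer's next position: one step toward the intercept."""
    target = predict_position(player_pos, player_vel, lookahead)
    dx, dy = target[0] - police_pos[0], target[1] - police_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
    return (police_pos[0] + speed * dx / dist,
            police_pos[1] + speed * dy / dist)
```

Because the pursuer aims at where the player will be rather than where the player is, several such agents naturally converge on the player's path from different sides, producing the encirclement effect the article describes.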
Professor Kim Jeong-tae of Dongyang University’s Department of Game Studies said, "There are attempts to exploit human emotional responses through AI to achieve commercial purposes," adding, "This is ethically problematic." He further stated, "If emotional manipulation is abused, players will feel uncomfortable knowing their emotions are being controlled by AI."
The ‘privacy invasion’ that can arise while game AI collects players’ behavioral data also raises ethical concerns. If AI collects and learns not only players’ in-game actions but also their conversation styles, play patterns, and even information from outside the game, and that data is not adequately protected, players’ personal information risks being leaked or commercially exploited. And since children and adolescents are the main consumers of games, separate privacy policies are needed to protect them.
Another challenge is that biases inherent in data can be reflected in game characters during AI’s data-driven learning process. AI can reinforce gender, racial, and cultural stereotypes present in training data, and biased behaviors and dialogue may then surface through game characters. As in the earlier Iruda incident, in which the Korean AI chatbot was suspended in 2021 after producing discriminatory remarks, biased social messages could be conveyed within games, or players could be treated unfairly. Even if unintentional, there is concern that players might unconsciously absorb negative values within the game.
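One simple way to surface the kind of data bias described above is to audit which descriptive words co-occur with which group terms in a dialogue corpus. The snippet below is a toy illustration under assumed term lists, not a production bias audit.

```python
# Toy bias audit: count how often illustrative role words co-occur with
# illustrative group markers in dialogue lines. A skewed count table is a
# first hint that the training data encodes a stereotype.
from collections import Counter

GROUP_TERMS = {"he", "she"}            # hypothetical group markers
DESCRIPTORS = {"leader", "assistant"}  # hypothetical role words

def cooccurrence(lines):
    """Return Counter of (group term, descriptor) pairs found in the same line."""
    counts = Counter()
    for line in lines:
        words = set(line.lower().split())
        for g in GROUP_TERMS & words:
            for d in DESCRIPTORS & words:
                counts[(g, d)] += 1
    return counts
```

If one group term pairs overwhelmingly with one descriptor, curators can rebalance or relabel that slice of the data before training.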
To address these ethical issues, major domestic game companies are preparing countermeasures. Nexon established an AI Ethics Committee this year; members from various departments participate voluntarily and discuss AI ethics issues, and regular roundtables establish and monitor AI ethical guidelines and codes of conduct. Nexon has also adopted Privacy by Design, the principle that companies protect data at every stage of handling personal information, and is building privacy safeguards in from the initial design phase.
NCSoft has likewise established its own AI ethics guideline, the ‘AI Framework’, and is applying it in game development. It is a set of development principles for the ethical use of AI, enabling review against three core values: data protection, unbiasedness, and transparency.
Recently, the AI Data Division of the newly spun-off ‘NC Research’ formed a ‘Red Team’ to build defenses that keep AI models from causing ethical problems. NCSoft is also introducing ‘safety fine-tuning’ in AI development. As AI learns from data, it acquires diverse patterns and knowledge, but it can also pick up violent or hateful content. Safety fine-tuning strengthens data filtering so that AI models do not learn from such negative data.
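The data-filtering idea behind safety fine-tuning can be sketched as a screening pass over training samples. This is an assumed minimal illustration, not NCSoft's actual system; the blocklist, function names, and simple substring match are all placeholders for far more sophisticated classifiers.

```python
# Minimal sketch of safety filtering for training data: reject any sample
# containing a blocklisted term before it reaches the model. Real systems
# use trained classifiers rather than a keyword list.
UNSAFE_TERMS = {"slur_example", "threat_example"}  # placeholder terms

def is_safe(text: str) -> bool:
    """Return True if the sample contains no blocklisted term."""
    lowered = text.lower()
    return not any(term in lowered for term in UNSAFE_TERMS)

def filter_corpus(samples):
    """Keep only samples that pass the safety check."""
    return [s for s in samples if is_safe(s)]
```

Filtering before training is cheaper than correcting a model afterward, which is why the screening step sits at the data stage rather than the deployment stage.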
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![[Game Character New Weapon AI] ③ "AI May Control Gamers"... Ethical Issues Engulfing the Industry](https://cphoto.asiae.co.kr/listimglink/1/2024102207231267356_1729549392.png)
![[Game Character New Weapon AI] ③ "AI May Control Gamers"... Ethical Issues Engulfing the Industry](https://cphoto.asiae.co.kr/listimglink/1/2024102117180766958_1729498687.jpg)
