Professor Park Jiseop's Team: "Trust Varies Depending on Voice Guidance Method"
Professor Park Jiseop's research team at Korea University of Technology and Education has identified key factors in how passengers resolve ethical dilemmas in autonomous vehicles.
A research team led by Professor Park Jiseop at Korea University of Technology and Education (KOREATECH) has announced research findings showing that passengers’ ethical decisions and trust in vehicles can vary depending on the type of artificial intelligence (AI) voice guidance provided in autonomous vehicle accident scenarios.
According to KOREATECH on July 31, the study analyzed the relationships among ethical decision-making in autonomous vehicles, human psychological responses, and the process of building trust in the technology. The results were published in the July online edition of 'Accident Analysis & Prevention,' a leading academic journal in the field of human factors engineering that ranks in the top 2.4% of its field by Journal Citation Reports (JCR) standards.
The study was jointly conducted by Professor Park Jiseop, Instructor Yoo Youngjae (Graduate School of Information, Yonsei University), and Professor Kim Heon (Department of Media, Hanyang University ERICA). The team implemented a 'trolley dilemma' scenario, one that autonomous vehicles could realistically face, within a virtual reality (VR) environment for their experiments.
The experiment involved 48 participants, who were asked to make a decision within 5 seconds in a scenario where a sudden sinkhole appeared on the road. The options were: go straight (resulting in the driver’s death), turn right (putting three pedestrians at risk), or turn left (putting two people in an oncoming vehicle at risk).
The research team found that participants’ tendencies for ethical choices and their trust in autonomous driving systems changed significantly depending on how the AI voice agent conveyed the situation information.
In particular, a 'prevention-focused message' such as "If you go straight, you will be in danger" led to more ethical decisions and higher system trust than a 'promotion-focused message' such as "If you turn right, you can save everyone."
Additionally, trust in the vehicle and willingness to purchase were higher in systems where the user retained the final decision-making authority, compared to systems where the AI made all decisions independently.
This finding underscores the importance of guaranteeing human autonomy and responsibility as autonomous vehicle technology is commercialized.
Lead author Yoo Youngjae stated, "Participants had to make ethical decisions in just 5 seconds, and the type of AI guidance message had a significant impact on their choices."
Professor Park Jiseop emphasized, "Autonomous vehicles should not only focus on technical perfection, but also evolve in a way that respects and supports human judgment. AI should not be a substitute for human decision-making, but rather a tool to assist human choices."
Professor Park expects that linking this research with the 'VirtualGraph (VG)' technology, which he first proposed in 2019, will enable more precise analysis of autonomous vehicle passengers' cognitive and decision-making structures under more realistic conditions.
VG is a next-generation cross-modal virtual reality technology that uses human sensory- and cognition-centered design to make the brain perceive virtual experiences as real physical stimuli, even within technical limitations.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.

