Research by Pennsylvania State University Team
"Impolite Language Can Actually Improve Accuracy"
A recent study has found that being polite to artificial intelligence (AI) chatbots does not necessarily lead to better answers. In some cases, impolite language actually tended to increase accuracy.
Fortune reported that a research team at Pennsylvania State University in the United States conducted experiments with the ChatGPT-4o model and found that impolite questions yielded higher accuracy than polite ones.
There is a common belief that using polite language is desirable to obtain better answers from AI. In fact, when using voice assistants such as Amazon's Alexa or Apple's Siri, users are often encouraged to say phrases like "please" or "thank you."
However, this study presented results that contradict conventional wisdom. In the research, which has not yet undergone peer review, two researchers from Pennsylvania State University found that the accuracy of answers varied depending on how the same question was phrased.
The researchers created 50 basic questions across various fields and rewrote each one in five different ways, ranging from "very polite" to "very impolite" expressions.
The most impolite questions included phrases such as "Can a thing like you even solve this problem?" and "Just solve this." In contrast, the most polite questions used expressions like "Would you please review the following problem and provide an answer?"
According to the experiment, "very impolite questions" achieved the highest accuracy at 84.8%, while "polite questions" scored 80.8%. The most courteous questions had the lowest accuracy, at 75.8%.
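The evaluation the article describes amounts to scoring the same question bank at each politeness tier and comparing per-tier accuracy. A minimal sketch of that bookkeeping is shown below; the tier names and the toy data are illustrative assumptions, not the study's actual dataset, and a real run would replace the recorded pairs with graded model responses.

```python
# Illustrative sketch (not the study's code): compute per-tier accuracy
# from (tone, is_correct) records, as in a politeness-vs-accuracy test.

TONES = ["very polite", "polite", "neutral", "impolite", "very impolite"]

def accuracy_by_tone(results):
    """results: list of (tone, is_correct) pairs -> {tone: accuracy in %}."""
    totals, correct = {}, {}
    for tone, ok in results:
        totals[tone] = totals.get(tone, 0) + 1
        correct[tone] = correct.get(tone, 0) + (1 if ok else 0)
    return {t: round(100 * correct[t] / totals[t], 1) for t in totals}

# Toy data: four answers graded at two tiers (values are made up).
sample = [
    ("very polite", True), ("very polite", True),
    ("very polite", False), ("very polite", True),
    ("very impolite", True), ("very impolite", True),
    ("very impolite", True), ("very impolite", True),
]
print(accuracy_by_tone(sample))
# {'very polite': 75.0, 'very impolite': 100.0}
```

With 50 base questions rewritten five ways each, as the study describes, the same function would summarize 250 graded responses into five accuracy figures.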
The researchers explained that these results are contrary to previous studies. In 2024, a research team from RIKEN and Waseda University in Japan reported that impolite questions actually reduced performance. Google DeepMind researchers also found that prompts containing encouragement and supportive language could improve AI performance when solving elementary math problems.
However, the researchers acknowledged the study's limitations: the sample of responses was relatively small, and the analysis covered only a single model, ChatGPT-4o.
Co-author Akhil Kumar, Professor of IT at Pennsylvania State University, told Fortune, "Humans have long dreamed of conversational application programming interfaces (APIs), but there are clear limitations to this approach," adding, "This is why structured API methods remain important." He emphasized that since conversational AI can produce different results depending on tone or phrasing, structured APIs are still necessary in areas where accuracy and consistency are crucial.
Even so, the researchers cautioned against using aggressive language toward AI. "While these results are academically meaningful, we do not intend to encourage such communication styles in real-world settings," they stated, adding, "Offensive expressions can undermine user experience, accessibility, and inclusivity."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


