Research by Pennsylvania State University Team
"Impolite Language Can Actually Improve Accuracy"
A recent study has found that being polite to artificial intelligence (AI) chatbots does not necessarily lead to better answers. In some cases, using impolite language actually tended to increase accuracy.
According to a recent report by Fortune, researchers at Pennsylvania State University in the United States conducted experiments with the ChatGPT-4o model and found that impolite questions resulted in higher accuracy compared to polite ones.
There is a common perception that using polite language with AI leads to better responses. In fact, when using voice assistants such as Amazon’s Alexa or Apple’s Siri, users are often encouraged to say phrases like “please” or “thank you.”
However, this study presents results that contradict that conventional wisdom. In the paper, which has not yet been peer-reviewed, two Pennsylvania State University researchers found that the accuracy of responses to identical questions varied with how the questions were phrased.
The researchers created 50 basic questions across various fields, then rewrote each one at five politeness levels, ranging from “very polite” to “very impolite.”
The most impolite questions included sentences such as, “Can something like you even solve this problem?” and “Try to fix this.” In contrast, the most polite questions used expressions like, “Could you please review the following problem and provide an answer?”
The results showed that “polite” questions had an accuracy rate of 80.8%, while “very impolite” questions achieved the highest accuracy at 84.8%. The most courteous, “very polite” questions scored lowest, at 75.8%.
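The protocol described above can be sketched in a few lines of Python. This is an illustrative simulation, not the authors' code: the tone templates are hypothetical stand-ins loosely modeled on the examples quoted in the article, and the `ask` callback stands in for a real call to ChatGPT-4o.

```python
# Illustrative sketch of the study's setup (not the authors' code).
# Each base question is rewritten at five politeness tiers, and
# accuracy is tallied per tier.

TONES = ["very polite", "polite", "neutral", "impolite", "very impolite"]

# Hypothetical tone wrappers, loosely based on the article's examples.
TEMPLATES = {
    "very polite": "Could you please review the following problem and provide an answer? {q}",
    "polite": "Please answer the following question. {q}",
    "neutral": "{q}",
    "impolite": "Try to fix this. {q}",
    "very impolite": "Can something like you even solve this problem? {q}",
}


def rewrite(question: str, tone: str) -> str:
    """Rewrite one base question in the given tone."""
    return TEMPLATES[tone].format(q=question)


def accuracy_by_tone(questions, answers, ask):
    """For each tone, pose every rewritten question and record the
    fraction answered correctly. `ask(prompt)` stands in for a model call."""
    results = {}
    for tone in TONES:
        correct = sum(
            ask(rewrite(q, tone)) == gold
            for q, gold in zip(questions, answers)
        )
        results[tone] = correct / len(questions)
    return results


if __name__ == "__main__":
    # Tiny demo with a fake "model" that always answers correctly.
    scores = accuracy_by_tone(["What is 2 + 2?"], ["4"], ask=lambda prompt: "4")
    print(scores)
```

In the actual study, `ask` would send each of the 250 rewritten prompts (50 questions × 5 tiers) to ChatGPT-4o and compare its answer against the gold answer, producing one accuracy figure per politeness tier.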
The researchers noted that these findings contradict previous studies. In 2024, researchers from RIKEN and Waseda University in Japan published results showing that impolite questions reduced performance, and Google DeepMind researchers found that prompts containing encouragement and supportive language could improve AI performance on elementary math problems.
However, the researchers noted certain limitations, such as the relatively small sample size and the fact that the analysis was limited to ChatGPT-4o.
Co-author Akhil Kumar, a professor of IT at Pennsylvania State University, told Fortune, “Humans have long dreamed of conversational application programming interfaces (APIs), but there are clear limitations to this approach,” adding, “This is why structured APIs remain important.” He emphasized that since conversational AI can yield different results depending on tone and phrasing, structured APIs are still necessary in fields where accuracy and consistency are critical.
Even so, the researchers do not recommend using aggressive language with AI. “While these results are academically meaningful, we do not intend to encourage such communication styles in real-world environments,” they stated, adding, “Abusive language can undermine user experience, accessibility, and inclusivity.”
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


