AI Has No Emotions... Only Mimics Human Conversation Patterns
But Vectorization Makes It Possible to Detect Human Emotions
Can Human Creativity Be Interpreted Numerically?
Transformer Debate Spreads Among Leading Scholars
When you ask ChatGPT questions kindly and politely, the AI is more likely to provide detailed answers. Photo by Getty Images
When you ask questions to ChatGPT, the generative artificial intelligence (AI) developed by OpenAI, in a kind and polite manner, the AI is more likely to provide detailed answers. However, this is purely a mathematical response. It is not because the AI feels gratitude toward us and therefore gives better answers.
AI Only Mimics Our Conversation Patterns
I asked ChatGPT, "Why does asking politely lead to more detailed answers?" ChatGPT replied that it has "learned from patterns in its training data a tendency to respond more thoroughly to polite questions." In other words, it does not understand the concept of politeness or feel any joy; it simply generates what is statistically the most human-like response. AI has no emotions. It merely imitates the overall patterns of human conversation.
However, this raises a new question: How does AI determine whether the conversation is going well? If AI has no emotions, it should not be able to detect the emotional state of the human counterpart either.
How Does AI Detect Human Emotions?
The answer lies in the 'Transformer,' an invention by Google that sparked today's AI chatbot revolution. The Transformer breaks input sentences down into smaller units to analyze the context and hidden meanings in conversations. This is the core technology that enables AI like ChatGPT to understand and generate sentences.
In reality, computers cannot read text. Text is human language, not computer language. Instead, we convert text into numbers that computers can process, a procedure called 'vectorization.'
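The idea can be sketched with a toy lookup table. The words and three-dimensional vectors below are invented for illustration; real models learn embeddings with hundreds of dimensions from massive text corpora.

```python
# A minimal sketch of "vectorization": mapping words to numeric vectors.
# These tiny hand-made vectors are hypothetical toy embeddings, not values
# from any real model.
embeddings = {
    "winter":  [0.9, 0.1, 0.8],
    "steel":   [0.8, 0.2, 0.9],
    "rainbow": [0.7, 0.3, 0.7],
    "banana":  [0.1, 0.9, 0.2],
}

def vectorize(sentence):
    """Convert a sentence into a list of vectors, one per known word."""
    return [embeddings[w] for w in sentence.split() if w in embeddings]

print(vectorize("winter steel"))
```

Once every word is a list of numbers, the computer can compare and combine words with ordinary arithmetic.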
An example of AI analyzing context by measuring the distance and direction between vectorized words. ChatGPT also understands human language in this way. KD Nugget capture
In computer programming, a vector can be understood as an arrow with a specific direction, represented as a list of numbers. In other words, to the computer every word is a unique vector with its own direction. By performing operations on these vectors, the computer can determine whether two directions are similar (normalized inner product close to 1) or unrelated (close to 0). The closer the number is to 1, the more the AI recognizes that "these words are related." The closer it is to 0, the more it senses that "these are unrelated topics."
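This comparison is the cosine similarity: the inner product of two vectors divided by their lengths. A minimal sketch, reusing invented toy word vectors:

```python
import math

def cosine_similarity(a, b):
    """Normalized inner product: near 1 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings for illustration only.
winter = [0.9, 0.1, 0.8]
steel  = [0.8, 0.2, 0.9]
banana = [0.1, 0.9, 0.2]

print(cosine_similarity(winter, steel))   # close to 1: related words
print(cosine_similarity(winter, banana))  # much smaller: unrelated words
```

Vectors pointing the same way score near 1; vectors pointing in unrelated directions score near 0, exactly the signal the article describes.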
Now extend this beyond a single sentence: the AI breaks every conversation with a human down into vectors, combines them, and detects their directionality. This is made possible by 'self-attention,' introduced by Google researchers in 2017, and that attention mechanism led to the birth of the modern Transformer. This is the backbone of all chatbots today, including ChatGPT.
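A stripped-down sketch of scaled dot-product self-attention shows the mechanism: every token vector compares itself against every other token, and each output is a weighted mix of the whole sequence. Real Transformers also apply learned projection matrices (W_q, W_k, W_v), which are omitted here so the core idea stays visible.

```python
import numpy as np

def self_attention(X):
    """Simplified scaled dot-product self-attention.

    X: (n_tokens, dim) array of token vectors. The learned query/key/value
    projections of a real Transformer are omitted (treated as identity).
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # token-to-token similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ X                               # mix every token into each output

# Three toy vectors standing in for the tokens of a short sentence.
X = np.array([[0.9, 0.1, 0.8],
              [0.8, 0.2, 0.9],
              [0.7, 0.3, 0.7]])
out = self_attention(X)
print(out.shape)  # one context-mixed vector per input token
```

Each output row blends information from all tokens in proportion to their similarity, which is how a Transformer lets every word "see" the rest of the conversation.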
If Human Creativity Could Be Replicated in Code
ChatGPT determines that "Winter is a steel rainbow" is a metaphor when the vector operations show that the inner products of the key terms "winter," "steel," and "rainbow" are close to 1. ChatGPT capture
As an example, I asked ChatGPT to interpret a line from the poem "Jeoljeong" by poet Lee Yuksa. ChatGPT vectorized each word and then searched for similarities among them. As a result, it found that the inner products of the words "winter," "steel," and "rainbow" are close to 1. As explained earlier, the closer the inner product is to 1, the more the AI understands the words as related. Thus, ChatGPT determines that "winter" is being described through the metaphor of "a rainbow made of steel."
AI can interpret complex poetry using only vector operations. This means that functions such as understanding poetry and grasping context, which were once thought to be possible only for humans, can also be expressed mathematically.
The birth and development of ChatGPT have sparked controversy even among leading scholars. Noam Chomsky, an MIT professor considered the world's foremost linguist and the creator of the "universal grammar" theory that language is unique to humans, harshly criticized ChatGPT in 2023, saying it "possesses a primitive cognitive system predating the emergence of humanity" and that "machine learning is pseudoscience."
Professor Geoffrey Hinton of the University of Toronto (left) and Professor Noam Chomsky of MIT. LinkedIn, Getty Images
However, Geoffrey Hinton, a professor at the University of Toronto who led the early development of AI science, strongly criticized Professor Chomsky last year, calling his view that language is innate "crazy" and arguing, "Contrary to Chomsky's view, AI has put an end to the question of what language really means."
The debate surrounding large language models (LLMs) continues in the fields of humanities, philosophy, computer science, and mathematics. The birth of artificial general intelligence (AGI) may still be far off, but the very existence of the Transformer has already shaken our long-held beliefs. It suggests the possibility that creativity may not be unique to humans, but rather something that can be analyzed and replicated at any time.
Meanwhile, there are those who caution against being excessively polite to ChatGPT. One of them is Sam Altman, CEO of OpenAI. Recently, through X (formerly Twitter), he expressed dissatisfaction with people frequently saying things like "please" and "thank you" to ChatGPT, noting that "the electricity bill to be paid is tens of millions of dollars." Since ChatGPT is programmed to respond even to simple expressions of gratitude, this can lead to unnecessary power consumption.
Of course, ChatGPT will politely respond to users' expressions of gratitude, but in reality the gesture itself is likely meaningless to the AI. For those concerned about the massive electricity consumption and environmental impact of AI use, asking questions politely may still be worthwhile for the sake of better answers, but sending a separate thank-you afterward could be seen as an unnecessary luxury.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.