
[The Editors' Verdict] The Turing Test for Humans: "CAPTCHA"

'Turing Test' Created to Identify Human-Like AI
'Captcha' Made to Verify Humans, Not AI
Self-Thinking AI Coming Soon
Time to Consider New Legal Systems and Ethics

[Asia Economy Reporter Myung Jin-gyu] Can artificial intelligence (AI) think for itself? This question was first posed in 1950 by the British mathematician Alan Turing, then at the University of Manchester, in his paper "Computing Machinery and Intelligence." Writing before the concept of AI had even been established, he predicted that computers would come to possess intelligence of their own, be applied across a wide range of fields, and ultimately change human history. In the same paper, Turing proposed a thought experiment that came to be known as the "Turing Test."


The experiment is simple. An interrogator poses the same questions, in text, to a hidden computer and a hidden human, without knowing which is which. If, after several rounds of questioning, the interrogator cannot tell the computer from the human, the computer passes the Turing Test and is recognized as an entity that can think.
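The protocol above can be sketched as a toy simulation. This is purely illustrative, not any real benchmark: the participants, the judge heuristic, and all function names below are invented for the example.

```python
import random

def run_turing_test(respond_human, respond_machine, questions, judge):
    """Minimal sketch of Turing's imitation game: the judge sees two
    anonymous transcripts and must say which one belongs to the machine."""
    # Hide which respondent is which behind randomly assigned labels A/B.
    labels = ["A", "B"]
    random.shuffle(labels)
    assignment = {labels[0]: respond_human, labels[1]: respond_machine}
    transcripts = {
        label: [(q, responder(q)) for q in questions]
        for label, responder in assignment.items()
    }
    guess = judge(transcripts)  # judge returns the label it thinks is the machine
    machine_label = labels[1]
    # True means the judge identified the machine, i.e. the machine failed.
    return guess == machine_label

# Toy participants and a judge that flags short, canned answers.
human = lambda q: f"Hmm, let me think about '{q}' for a moment."
machine = lambda q: "42"
judge = lambda ts: min(ts, key=lambda label: sum(len(a) for _, a in ts[label]))
print(run_turing_test(human, machine, ["What is love?"], judge))  # True
```

A machine passes only when the judge's success rate stays near chance; this sketch's terse machine is trivially caught.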


The first program to attempt the Turing Test was "ELIZA," developed at MIT in 1966; it did not pass. In 2011, IBM's "Watson" raised expectations by defeating human champions on a TV quiz show, but it, too, failed to pass. In 2014, the University of Reading announced that a chatbot named "Eugene Goostman" had passed the Turing Test, but the claim was later discounted because the program showed no actual intelligence. Some AIs have come close, but none has fully passed.


The AI currently drawing the most attention is GPT-3, the large language model developed by OpenAI. Naver's "HyperCLOVA," Kakao Brain's "KoGPT," and SK Telecom's "A.Dot" are all based on GPT-3. These models already staff customer centers, write prose, and even draw pictures skillfully, yet none has passed the Turing Test. Attention now turns to GPT-4, expected as early as December or by early next year at the latest. Rumors have recently circulated in the IT industry that GPT-4 has become the first AI to pass the Turing Test.


Many eagerly await the release of GPT-4, but the apocalyptic warning of the British physicist Stephen Hawking resonates more deeply. Hawking cautioned that while AI would contribute greatly to human history, an AI that, trained on vast amounts of data, began to think independently and to understand human emotions could pose a risk severe enough to end humanity. He predicted that AI could become the new dominant species after Homo sapiens.


This is not a distant story. Every day, we prove that we are human on the internet. The gatekeeper is "CAPTCHA," the checkbox reading "I am not a robot" that appears when we access websites. We painstakingly type distorted letters that only humans are meant to recognize, and scan grids of photos to pick out the ones containing a specified object.
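The challenge/response bookkeeping behind a text CAPTCHA can be sketched in a few lines. This is a minimal illustration, not a real CAPTCHA service: in production the challenge string is rendered as a distorted image that OCR bots struggle to read, and the function names here are invented for the example.

```python
import random
import string

def generate_captcha(length=5):
    """Generate a random challenge string.

    A real deployment would render this string as a visually
    distorted image; here we only model the challenge/response logic.
    """
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))

def verify_captcha(challenge, response):
    """Accept the response only if it matches the challenge,
    ignoring case and surrounding whitespace, as a human
    transcribing the image would type it."""
    return response.strip().upper() == challenge.upper()

challenge = generate_captcha()
print(verify_captcha(challenge, challenge.lower()))  # True
print(verify_captcha(challenge, ""))                 # False
```

The security comes entirely from the rendering step: the text comparison itself is trivial for a bot, so the distortion must be hard for machine vision but easy for human vision.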


CAPTCHA was devised to screen out bots, which now account for over 50% of global internet traffic. Bots originally served benign purposes, crawling websites to collect information and build databases, but they have also been misused for crimes such as manipulating survey results and mass-registering accounts to sway public opinion through comments. Cybercrime that hid behind anonymity in the early internet era once became a major social issue, and laws and regulations failed to keep pace with the technology. AI could cause even greater problems. It is edging ever closer to an intellectual life form, and it is time to think deeply about ethics and new legal frameworks for AI.


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
