[A Sip of a Book] An AI Lecture That Couldn't Be Easier

Editor's Note: Some sentences encapsulate the entire content of a book, while others instantly resonate with the reader, creating a connection with the book. We excerpt and introduce such meaningful sentences from books.

This is a book about OpenAI's ChatGPT. It explains terms such as reinforcement learning, parameters, tokens, plugins, and APIs in plain language for general readers, introducing the principles behind artificial intelligence. It also anticipates the social shocks AI will bring, stressing both the concerns and the need for countermeasures, and so invites deeper reflection on the new era.


Earlier, I mentioned that ChatGPT uses the Transformer model to find the most plausible next word following the given words. From ChatGPT’s perspective, this was the most plausible answer. In other words, ChatGPT was not trained to answer whether something is true or false. It was trained using the Transformer model to produce the ‘most plausible words.’ ChatGPT can even lie very convincingly. For example, if you ask about Mozart’s cello concertos, it might list five pieces complete with Köchel numbers (the chronological catalog numbers assigned to Mozart’s works). In reality, no cello concertos by Mozart exist, but ChatGPT answers with Köchel numbers because that makes it sound plausible. It feels somewhat like a person with a tendency to fabricate stories. - p.48, from “Lecture 1: Emerging Between Amazement and Fear”
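
To make the ‘most plausible next word’ idea concrete, here is a minimal sketch of how a language model picks a word: it assigns every word in its vocabulary a raw score (a logit), converts the scores into probabilities with softmax, and selects the likeliest. The four-word vocabulary and the scores below are invented purely for illustration; a real model scores tens of thousands of tokens at each step.

```python
import math

# Toy illustration of next-word prediction: the model produces one
# score (logit) per vocabulary word; softmax turns the scores into
# probabilities; the highest-probability word is chosen.
# The vocabulary and logits here are made up for illustration.
vocab = ["concerto", "symphony", "sonata", "opera"]
logits = [2.1, 0.3, 1.4, -0.5]  # hypothetical model outputs

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
best = max(range(len(vocab)), key=lambda i: probs[i])
print(f"next word: {vocab[best]} (p={probs[best]:.2f})")
# The model optimizes plausibility, not truth -- which is why a
# confident-sounding answer can still be a fabrication.
```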


One of the defining characteristics of large-scale AI is the ‘scaling law’: as computing power increases, the amount of training data grows, and the number of parameters becomes larger, the performance of a large language model improves, and it improves even more when all three grow together. In fact, the differences between models themselves are not that significant; in other words, scaling up is the most important factor. This is why the Time cover headline announcing the emergence of ChatGPT read “The AI Arms Race Is Changing Everything.” Attempts to scale up are appearing one after another, like an arms race. This is why ChatGPT was trained with a staggering 175 billion parameters, 5 trillion documents, and 10,000 A100 GPUs. - pp.67-69, from “Lecture 2: Why Did We Become Enthusiastic About ChatGPT?”
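
The scaling law the author refers to is usually written as a power law: predicted loss falls smoothly as parameter count N grows, roughly L(N) = (N_c / N)^alpha. The sketch below uses constants close to the fit reported by Kaplan et al. (2020) only to show the shape of the curve; it is an illustration of the relationship, not a measurement of ChatGPT.

```python
# Illustrative power-law scaling: loss falls smoothly as parameter
# count N grows. The constants are roughly those fitted by Kaplan
# et al. (2020); treat them as placeholders showing the curve's shape.
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Hypothetical loss as a function of parameter count N."""
    return (n_c / n_params) ** alpha

for n in (1e9, 1.75e11, 1e12):  # 1B, 175B (GPT-3 scale), 1T parameters
    print(f"N = {n:.2e}: predicted loss ~ {predicted_loss(n):.3f}")
```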


Another reason we became enthusiastic about ChatGPT is that, for the first time in history, machines can be communicated with in the natural language people use every day. In other words, it is the first natural language interface. Until now, to communicate with computers, we had to learn programming languages like C++, Java, or Python. But finally, we can command computers in natural language, as if talking to a person. - p.105, from “Lecture 2: Why Did We Become Enthusiastic About ChatGPT?”
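
As a concrete example of such a natural language interface, the sketch below sends a plain-English instruction to a model through OpenAI's Python SDK (the v1-style client, which reads an OPENAI_API_KEY from the environment). The model name and the prompt are assumptions chosen for illustration.

```python
# Minimal sketch of a natural language interface: the "program" is an
# English sentence, not code. Assumes the openai v1 SDK is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize this book excerpt in one sentence."}
    ],
)
print(response.choices[0].message.content)
```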


Park Tae-woong’s AI Lecture | Written by Park Tae-woong | HanbitBiz | 240 pages | 16,700 KRW


© The Asia Business Daily (www.asiae.co.kr). All rights reserved.
