"Sell Cheap and Profit from Consumables" - The Razor 'Gillette' Strategy Behind AI Speaker Expansion
"Why Don't You Understand Me?" From All-Purpose Assistant to 'Annoying Assistant'
The Revival of AI Speakers: The Future of Voice Agents
'All-Purpose Assistants' May Come in Different Forms
It is a peaceful evening. Dinner is over, and a family is relaxing on the sofa watching television. On the living room table sits Google's AI speaker, Google Home. Suddenly, the speaker turns itself on and starts saying something unexpected.
"The Whopper is a hamburger made with a well-cooked 100% beef patty, tomatoes, onions, and more."
An explanation of the Whopper, Burger King's signature hamburger, unexpectedly echoed through the house. It was the work of an advertisement Burger King apparently considered clever.
Burger King aired a new commercial across the United States. A store employee holding a Whopper says, "I can't explain how great the Whopper is in just a 15-second ad." Then, moving closer to the screen, they say, "OK Google, what is a Whopper burger?"
Google's AI speaker, Google Home, responds to the phrase "OK Google." To give a command, you first say "OK Google" to wake the sleeping device. For example, to learn tomorrow's weather, you say, "OK Google, tell me the weather for tomorrow." Most AI voice devices operate this way, each with its own wake word: Samsung Electronics uses "Hi Bixby," Apple uses "Hey Siri," and Amazon uses "Alexa."
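The wake-word gating described above can be sketched in a few lines. The code below is purely illustrative: the function name, the matching logic, and the wake-word list are assumptions for the example, not any vendor's actual implementation.

```python
# Illustrative sketch of wake-word gating: speech is ignored unless it
# begins with a known wake word. Names and logic are hypothetical.

WAKE_WORDS = ("ok google", "hi bixby", "hey siri", "alexa")

def handle_utterance(transcript):
    """Return the command portion if a wake word leads, else None."""
    text = transcript.lower().strip()
    for wake in WAKE_WORDS:
        if text.startswith(wake):
            # Device wakes and processes whatever follows the wake word.
            return text[len(wake):].lstrip(" ,")
    return None  # device stays asleep

# The Burger King ad worked because the device cannot tell who is speaking:
handle_utterance("OK Google, what is a Whopper burger?")
# returns "what is a whopper burger?"
```

The vulnerability the ad exploited is visible here: the gate checks only *what* was said, not *who* said it, so a voice from a television passes the same test as the owner's.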
Burger King exploited this characteristic, turning a vulnerability of AI speakers into an advertising channel. Devices within earshot of the ad recognized the wake word ("OK Google") and the sentence that followed as their owner's command and responded, reciting the opening line of the Whopper's entry on the online encyclopedia Wikipedia.
Burger King may have considered the stunt ingenious, but consumers were not amused. The American magazine Forbes called the incident the "Google Home hijacking case," stating, "Burger King's ad went too far."
AI speakers made a flashy debut under the nickname 'all-purpose assistant.' These days, however, they are hard to find. The Google Home hijacking was only a small part of the failures AI speakers suffered. Where did all those AI speakers go? Why did they fail?
"Sell cheap and profit from consumables" - The razor 'Gillette' strategy spreading AI speakers
An AI speaker is a device that uses AI-based voice recognition to perform functions such as playing music, searching for information, and managing schedules. In English-speaking countries they are called smart speakers. Opinions differ on which device was the 'first AI speaker,' but Amazon's Echo, released in 2014, is generally considered the pioneer. After the Echo's debut, major companies in Korea and abroad, including Google, Samsung, Naver, Kakao, SK Telecom, and KT, rushed into the market.
AI speakers were predicted to be innovative devices that would dominate the future alongside smartphones. The market was expected to grow rapidly, from 294.6 billion KRW in 2017 to 1.87 trillion KRW by 2025, and the 'all-purpose assistant' brightened each company's sales outlook.
When launching the Echo, Amazon invoked the razor-and-blades business model associated with Gillette: sell the base product (the razor) at a low price or below cost and profit from consumables (the blades). Following this model, Amazon sold the Echo very cheaply, expecting consumers to order products by voice and use other Amazon services.
"Why don't you understand me?" From all-purpose assistant to 'annoying assistant'
However, purchases through the Echo were almost nonexistent. Most users stuck to simple commands and functions such as checking the weather or setting alarms. According to a report obtained and published last year by the American business daily The Wall Street Journal (WSJ), the Amazon division responsible for devices like the Echo racked up losses of $25 billion (about 40 trillion KRW) from 2017 to 2021.
The reasons for the failure were varied. First, practicality fell far short: complex conversations were impossible, and only simple commands worked reliably. Privacy concerns also arose, with rumors that the always-on devices were collecting surrounding conversations.
This can also be explained by the 'Expectation Disconfirmation Theory,' which states that consumer satisfaction is determined by the difference between prior expectations and actual results. In the case of AI speakers, aggressive marketing and sci-fi imaginations raised consumer expectations very high. However, the actual performance did not meet those expectations.
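The theory's core claim, that satisfaction tracks the gap between expectation and delivery rather than delivery alone, can be shown with a toy calculation. The 0-to-10 scale and the example scores below are invented purely for illustration.

```python
# Toy illustration of Expectation Disconfirmation Theory: satisfaction
# depends on the gap between delivered performance and prior expectation.
# The scale (0-10) and the example numbers are made up for this sketch.

def disconfirmation(expectation, performance):
    """Positive: delighted. Near zero: content. Negative: disappointed."""
    return performance - expectation

# Hyped 'all-purpose assistant': sky-high expectation, modest reality.
ai_speaker = disconfirmation(expectation=9.0, performance=4.0)    # -5.0

# A plain Bluetooth speaker: modest expectation, same modest reality.
plain_speaker = disconfirmation(expectation=4.0, performance=4.0)  # 0.0
```

The two devices deliver identical performance, yet only the over-hyped one disappoints, which is exactly what the marketing-driven expectations did to AI speakers.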
Sometimes the device prompted users to buy items they did not need, or failed to understand commands: asked to turn off the living room TV, it might turn off the living room lights instead. Repeated malfunctions taught users to distrust voice commands. Rather than issue an unpredictable command, they preferred to do the task themselves; spending extra time on a feature meant to save time was unacceptable.
The 'all-purpose assistant' brought home with high expectations thus became an 'annoying assistant.' This happened not only to Amazon Echo but to all AI speakers. AI speakers that once proudly occupied the center of the living room were relegated to drawers.
The revival of AI speakers: The future of voice agents
Will there be a time to take out AI speakers again? At least, a turning point has emerged.
That turning point is the advent of generative AI, exemplified by ChatGPT. Voice recognition combined with LLMs (Large Language Models) has improved dramatically compared with the past.
Whereas past AI speakers only recognized fixed commands and could only respond simply, LLM-based voice recognition allows natural conversations. It understands the context of previous conversations and can grasp ambiguous expressions. Even when errors occur, it converses with the user to understand intentions and suggests solutions.
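The contrast with fixed-command devices can be sketched roughly. The resolver below is a toy stand-in for an LLM; the device list, function name, and matching rules are all invented. It only illustrates the idea that carrying conversation context forward lets a vague follow-up like "turn it off" be resolved.

```python
# Toy contrast with fixed-command devices: a context-aware assistant
# carries earlier turns forward, so a pronoun in a follow-up command can
# be resolved. This word-matching resolver is a stand-in for an LLM.

KNOWN_DEVICES = ("tv", "lights", "speaker")

def resolve_pronoun(history, utterance):
    """Resolve 'it' to the most recently mentioned known device."""
    if "it" in utterance.lower().split():
        for past in reversed(history):        # newest turn first
            for device in KNOWN_DEVICES:
                if device in past.lower():
                    return utterance.lower().replace(" it", f" the {device}")
    return utterance.lower()  # nothing to resolve

history = ["Is the living room TV still on?"]
command = resolve_pronoun(history, "Then turn it off")
# command == "then turn the tv off"
```

A fixed-command device sees only the second utterance in isolation and has no way to know what "it" refers to; an assistant that keeps history does.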
Misrecognitions have also become far rarer. Where earlier voice recognition handled only standard pronunciations, recognition of non-native speakers' accents and regional dialects has improved greatly.
These revolutionary improvements in voice recognition technology are driving changes in Amazon's strategy. Amazon is considering integrating third-party AI models into its AI voice assistant Alexa. Anthropic's AI model Claude is reportedly a strong candidate.
Those who have tried the voice modes of LLM-based assistants such as Claude or ChatGPT are unanimously amazed at their performance. They leave a strong impression that a true 'all-purpose assistant,' one that understands and responds to commands flawlessly, is not far off. If AI speakers acquire this level of capability, their revival will be only a matter of time.
'All-purpose assistants' may come in different forms
Another interesting point is that evolved voice recognition does not have to come in the form of a 'speaker.' You already hold a voice assistant in your hand: AI assistants are embedded in smartphones, with Bixby for Samsung Galaxy users and Siri for iPhone users. The assistant need not be fixed in the body of an AI speaker.
Furthermore, the emergence of completely new types of devices can be expected. How about a glasses-type device? Apple and Meta have already released related products. Combining visual information and voice conversation could create even more innovative assistants.
Next-generation AI voice assistants still face many challenges. Privacy concerns about devices collecting voice data around the clock remain. The high cost of recognizing and processing conversations is another hurdle, as is real-time response speed: no matter how smart an assistant is, if it is much slower than a human it will struggle to win popular appeal. Most important, a clear revenue model must be established. However convenient a service is for consumers, a company cannot sustain it without profit, unless it is a charity.
What is clear is that the environment for transforming the 'fake all-purpose assistant' into a 'real assistant' is being prepared. The failures and limitations of AI speakers are being largely resolved by LLMs. Combined with the potential for integration with various devices, a true 'all-purpose assistant' is approaching us. Its form may differ from previous AI speakers, though.
Next Series Preview
(15) Thinking of Data as ‘Crude Oil’ (02.01)
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![Called "All-Purpose Secretary," but Talking to It Only Sparks Frustration [AI Error Notes]](https://cphoto.asiae.co.kr/listimglink/1/2025011718175393659_1737105473.jpg)

