Apple Focuses on Upgrading NPU Performance Over CPU
Prominent Columnist: "The Core of the iPhone 15 Pro Is the NPU"
Chip and OS-Level Support Accelerates On-Device AI Competition Including LLM
Apple cannot ignore Nvidia's rapid advance in AI semiconductors. Nvidia, a fabless company that designs chips but does not manufacture them, has seen its market capitalization soar to roughly 30% of Apple's. Although the two companies' revenues are of a different order, the surge is driven by explosive demand for Nvidia's H100 and H200 chips, which are needed to train generative AI models such as LLMs. While Apple was embroiled in controversies such as overheating after the iPhone 15 launch, Nvidia has reported results exceeding market expectations on surging demand.
The generative AI boom sparked by ChatGPT and Nvidia's rapid rise are also changing Apple's stance.
Apple Shockwave previously reported that Apple uses the term machine learning instead of artificial intelligence (AI). Apple's policy was to focus on 'on-device' AI that operates on iPhones, MacBooks, and iPads themselves rather than cloud-based LLM AI.
Apple's stubbornness is well known. Even though virtually all other smartphone makers had long adopted USB-C charging ports, Apple stubbornly kept its proprietary Lightning port and adopted USB-C only on the iPhone 15, after European Union (EU) regulations took effect.
Even Apple can no longer insist on the term machine learning amid AI's rapid advance. With Nvidia's GPU-based AI evolving toward artificial general intelligence (AGI) comparable to human intelligence, Apple, which holds roughly half of the smartphone market, recognizes it cannot fall behind.
Now, most smartphone chipmakers and handset manufacturers, including Apple, Qualcomm, and Samsung Electronics, are expected to build LLM functions into their self-developed application processors (APs) and support them at the operating-system level to launch full-fledged services.
Apple CEO Tim Cook publicly declared an AI push, as if he could no longer hold back. On the earnings call of November 3, 2023, Cook effectively acknowledged that Apple is developing an LLM. Apple is also expected to add AI functions to its developer tool 'Xcode' so that outside app makers can use the iPhone's AI capabilities.
To do this, AI support at the semiconductor level is essential. Apple's latest chip, the A17 Pro, does not yet run an LLM, but Apple has steadily expanded AI support by including its Neural Engine (NPU) in A-series chips.
If Apple ships LLM features, support at both the OS level and the chip level is likely. The core of this is the Neural Engine.
Internal view of the A17 Pro chip unveiled by Apple. It states that the Neural Engine responsible for AI functions is twice as fast as before. Source: Apple
Apple first included the Neural Engine in the A11 chip used in the iPhone X, the 10th-anniversary iPhone. The A17 Pro, the world's first 3nm chip in a consumer device, has been criticized for CPU gains that fell short of expectations, but the Neural Engine is a different story. Apple says the Neural Engine in the A17 Pro is twice as fast as in the previous A16. The first Neural Engine could perform 600 billion operations per second; the A17 Pro can perform 35 trillion, roughly a 58-fold improvement. The capacity to support AI functions has grown accordingly.
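The 58-fold figure follows directly from the two throughput numbers the article cites, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the Neural Engine figures cited above.
# Both numbers come from the article: the A11's first Neural Engine at
# 600 billion operations per second, the A17 Pro at 35 trillion.

A11_OPS = 600e9   # 600 billion operations/second
A17_OPS = 35e12   # 35 trillion operations/second

speedup = A17_OPS / A11_OPS
print(f"A17 Pro vs. A11 Neural Engine: {speedup:.1f}x")  # ~58.3x
```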
Tim Bajarin, an American IT columnist, also argued that the most notable part of the iPhone 15 Pro is the advancement of the Neural Engine. He states that the A17 Pro's Neural Engine is, alongside the GPU, one of the most important parts of the chip. If Apple focused on strengthening the Neural Engine and GPU rather than the CPU, the future target is clear: AI.
Bajarin's diagnosis is that although some media say Apple is lagging in the AI race, Apple, which has aimed for on-device AI rather than server-based AI, can use a stronger Neural Engine to run locally the kinds of AI functions that cloud servers handle today, and run them faster.
Even if iOS 18 supports an LLM, it will be difficult for all iPhones running the same OS to support identical functions. Apple has long limited new app-level services by chip performance. A representative example is 'Apple Music Sing.' This feature removes only the singer's voice while the music plays, essentially turning the iPhone into a karaoke machine.
'Sing' is a representative AI function of the iPhone, but not all iPhones can use it. Apple allows it only on iPhones and iPads with an A13 chip or later, because the Neural Engine performance of the A-series chips differs by generation. Even the current A17 Pro may not support Apple's LLM, likely because an LLM requires an even more powerful Neural Engine.
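Gating a feature by chip generation, as the article describes for Sing's A13 cutoff, can be sketched roughly as follows. This is a hypothetical illustration, not Apple's actual implementation; the chip list and the capability check are assumptions based only on the A13-or-later requirement mentioned above.

```python
# Hypothetical sketch of chip-generation feature gating, modeled on the
# article's description that Apple Music Sing requires an A13 or later.
# The chip ordering below is an assumption for illustration only.

A_SERIES_ORDER = ["A11", "A12", "A13", "A14", "A15", "A16", "A17 Pro"]

def supports_feature(chip: str, minimum_chip: str) -> bool:
    """Return True if `chip` is the same generation as or newer than `minimum_chip`."""
    return A_SERIES_ORDER.index(chip) >= A_SERIES_ORDER.index(minimum_chip)

print(supports_feature("A12", "A13"))      # False: too old for Sing
print(supports_feature("A17 Pro", "A13"))  # True
```

The same pattern explains why an on-device LLM could ship gated to only the newest chips, even when every device runs the same OS version.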
Although AI functions have advanced faster than CPU performance, consumers still find this hard to notice. For AI to be useful on smartphones, AI-assistant functions must improve, which is why Apple plans to significantly expand its existing assistant, 'Siri.' If Siri gains proper LLM capabilities, over a billion iPhone users could experience AI instantly. Compared with LLMs that require a separate app or website to ask questions, this would give consumers far easier access to AI.
Bajarin estimated that the Neural Engine will play a major role in enhancing Siri with AI. He called the criticism that Apple is lagging in AI misguided, recalling that the renowned AI scholar Kai-Fu Lee was already working on AI-based speech recognition at Apple in 1991 and significantly influenced Apple's early AI research. Even before that, Apple had been exploring the AI-based 'Knowledge Navigator' concept since 1987, championed by then-CEO John Sculley. Bajarin emphasized that Apple already applies AI across many products, apps, and services.
Apple's AI offensive is expected to appear at the Apple Worldwide Developers Conference (WWDC) to be held in June 2024.
Qualcomm and Samsung cannot afford to fall behind. Qualcomm, the leading chipmaker of the Android camp, recently declared that the core of its 'Snapdragon 8 Gen 3' chip is its NPU. Samsung announced that the NPU in its new Exynos 2400 chip is 14.7 times faster than its predecessor and is previewing 'Gauss,' its self-developed LLM.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
[Image: Nvidia Revolution Awakens Apple AI Instinct]

