This Year's WWDC to Focus on AI Announcements
Expectations for Additional Integration with External LLMs
Apple has unveiled the detailed schedule for its annual developer event, the Worldwide Developers Conference (WWDC) 2025, which begins on June 9 (local time) next month. This year, Apple is expected to focus on enhancing artificial intelligence (AI) capabilities at the event.
On May 21, Apple announced the schedule for WWDC 2025, which will take place online from June 9 to June 13. This year's WWDC will begin with Apple's keynote on the morning of June 9, followed by sessions for developers. Held every June, WWDC is an event for developers where Apple has traditionally announced the next versions of operating systems such as macOS, iOS, and iPadOS for its devices.
This year, as with last year, AI is expected to be the main topic at WWDC. At last year's WWDC24, Apple introduced its AI service "Apple Intelligence" for the first time. Apple Intelligence currently supports features such as notification summaries, text editing, and basic image generation and editing on supported Apple devices.
According to major foreign media outlets including Bloomberg, Apple is expected to announce at this WWDC that it will allow third-party developers to use its proprietary small language models (SLMs) in their software. Some Apple Intelligence tasks run "on-device," meaning they are handled directly on the device rather than on a server. Apple reportedly plans to open up these lightweight on-device models first, enabling developers to run AI tasks locally.
The AI features Apple unveils at this WWDC are expected to mark a critical turning point in the competition ahead. Apple Intelligence has been criticized for lagging behind rival AI services such as Google's Gemini and Samsung's Galaxy AI in both launch timing and features. It also supports a narrower range of devices: Apple offers Apple Intelligence only from the relatively recent iPhone 15 Pro onward, whereas Samsung supports Galaxy AI as far back as the older Galaxy S21 series.
The launch of Apple Intelligence's core feature, a "more personalized Siri," originally scheduled for this year, has been postponed to next year, reportedly because of difficulties encountered during AI development. The personalized Siri works by having the AI model learn from the user's personal data, such as text messages, emails, calendars, notes, and photos. Based on this, the voice assistant Siri gains the ability to provide customized responses to user requests.
Another key point to watch is whether Apple will expand integration with external large language models (LLMs) beyond ChatGPT, currently the only one supported. Some Apple Intelligence tasks are processed through OpenAI's ChatGPT: when Siri receives a question and determines that ChatGPT would provide a better answer, it asks for the user's consent before returning a response via ChatGPT.
Apple is also reportedly considering adding Google's Gemini to Apple Intelligence. Sundar Pichai, CEO of Google, has said he hopes Gemini will become a default option on iPhones within the year.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.