Google Expands Collaboration with Samsung on Smart Glasses
Unveils AI Tools for Filmmaking and Enhanced Search
Google, the world's largest search engine company, has launched a counteroffensive against the rise of generative artificial intelligence (AI) services such as ChatGPT by fully integrating AI features into its search services.
On May 20 (local time), Google held its annual developer conference, Google I/O, in Mountain View, California, where it announced new search features heavily incorporating generative AI.
Google has significantly expanded its AI-powered search summaries, previously offered as 'AI Overview,' into a new 'AI Mode.' AI Mode is a search function powered by Google's latest AI model, Gemini 2.5. Sundar Pichai, CEO of Google, stated in his keynote address, "AI Mode is a new feature designed for users who want an end-to-end experience, with AI handling search, analysis, summarization, and result delivery," adding, "This represents the future of search, moving from information to intelligence."
In addition to traditional text input, 'AI Mode' supports voice and video inputs, combining multimodal capabilities such as text summarization, image analysis, and video comprehension. Users can interact with the search engine as they would with an AI chatbot, entering not only keywords but also full sentences and follow-up questions.
The real-time search feature 'Search Live,' which uses a smartphone camera, has also been integrated. Users can point the camera at something they are curious about while performing a task, and the AI will instantly provide relevant information or explain it aloud.
Beyond standard keyword-based search, a new 'Deep Search' function has been added, allowing AI to understand user queries, autonomously explore vast amounts of information, and deliver comprehensive reports or in-depth answers.
Additionally, the agent feature called 'Project Mariner,' introduced last year, has been incorporated. This allows the AI to handle a series of tasks such as booking tickets, making restaurant reservations, and applying for services.
AI Mode is available to all users in the United States starting today and will be expanded to other countries in the future. However, Google has not disclosed a specific timeline for its rollout to other countries.
Google, which has been developing an extended reality (XR) headset with Samsung based on the Android operating system, is also collaborating with Samsung to develop smart glasses. Google announced that it will develop smart glasses running the Android XR operating system in partnership with Samsung Electronics and Korean eyewear brand Gentle Monster.
This marks Google's return to smart glasses development for the first time in 10 years, since it discontinued 'Google Glass' two years after its 2013 launch. The new smart glasses will be equipped with a camera, microphone, and speaker, and will sync with smartphones, allowing users to answer calls, send messages, and use apps without taking their phone out of their pocket.
Notably, the 'Gemini Live' feature will be included, enabling AI to recognize what the user sees and hears through the camera, provide relevant information, and remind the user of important matters. Gemini Live is also available as an app for Android and iOS smartphones starting today. Google explained that the smart glasses will also feature real-time translation, enabling two people speaking different languages to communicate naturally.
In addition, Google unveiled 'Veo 3,' which adds audio generation to its existing video generation AI model 'Veo 2,' and introduced 'Imagen 4,' its latest image generation AI model with significantly improved clarity. The company also announced the launch of 'Flow,' an AI filmmaking tool that integrates Veo, Imagen, and Gemini to create cinematic scenes and stories.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


