Google announced on the 11th (local time) that it will launch its latest artificial intelligence (AI) model, 'Gemini 2.0.'
The release comes one year after Gemini 1.0 debuted in December last year; an interim version, Gemini 1.5, followed in February.
According to Google, Gemini 2.0 delivers outstanding performance. It is a multimodal model that handles text, images, and video, and it is optimized for the era of AI agents. The company said it has improved response speed, natural conversation, and multimodal functions so the model can act as an agent on the user's behalf.
Gemini 2.0 is built on Trillium, Google's self-developed sixth-generation tensor processing unit (TPU), and goes beyond organizing and understanding information to make it much more useful.
Google plans to provide the model to developers and test program participants and quickly apply it across all products, starting with search.
Additionally, Google explained that by integrating Gemini 2.0 into 'Project Astra,' introduced in May, conversations with users have become more natural, response speed has increased, and memory has been enhanced. Project Astra is an AI service that acts as a personal assistant by seeing, hearing, and conversing via voice like a human.
Demis Hassabis, CEO of Google DeepMind, emphasized, "Gemini 2.0 offers a completely new level of agent-based experience through a combination of diverse functions, more natural interactions, fast response speed, and the ability to handle complex tasks."
Among the Gemini 2.0 product lineup, Google is making Gemini 2.0 Flash available as an experimental model starting today on the developer platform Google AI Studio and the enterprise platform Vertex AI. It is twice as fast as its predecessor on major benchmarks and can generate multimodal outputs combining text and images. Flash is the lightweight counterpart to the Pro model in the Gemini lineup, which is tiered by parameter size into Ultra, Pro, and Nano models.
Google also unveiled 'Project Mariner,' a Gemini 2.0-powered agent that assists with complex tasks, and 'Jules,' an AI agent for developers. Project Mariner, still in the experimental stage, supports users' complex tasks by understanding and reasoning over the information on browser screens. Jules helps with coding tasks.
On the same day, Google introduced Deep Research, an AI assistant that helps write research reports. Acting on a user's behalf, it explores complex topics, proposes a multi-step research plan, analyzes related information from across the web, and delivers the results as an easy-to-understand report. Deep Research is available from today in the paid tier, Gemini Advanced.
Sundar Pichai, CEO of Google, said, "With new advancements in multimodality, we are getting closer to Google's vision of a 'universal assistant.'"
Google plans to apply Gemini 2.0's advanced reasoning capabilities to its AI search feature, AI Overviews, enabling it to handle complex questions such as advanced mathematical equations, multimodal queries, and coding. Testing will begin this week, and next year Google intends to expand the Gemini 2.0-powered AI Overviews features to more countries and languages.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.


