Gemini AI Model Now Running in Seoul Region
Latest TPU Computation Also Supported
Google Cloud is now processing computation tasks for its latest large language model (LLM), Gemini 2.5, on domestic servers. The company is also supporting AI transformation (AX) in both the private and public sectors through its proprietary Tensor Processing Unit (TPU).
On July 8, Google Cloud held 'Google Cloud Day Seoul' at the COEX Convention Center in Gangnam-gu, Seoul, where it presented its vision for artificial intelligence (AI).
Jiseong Ji, President of Google Cloud Korea, is presenting at the 'Google Cloud Day Seoul' media briefing held on the 8th at the COEX Convention Center in Gangnam-gu, Seoul. Provided by Google Cloud
Jiseong Ji, President of Google Cloud Korea, stated, "We have launched the Gemini 2.5 model in the Seoul region," and explained, "It is not just the data that is located in the Seoul region; the machine learning process itself runs in the Seoul region." Previously, data entered by users of Gemini services was sent to overseas regions (data centers) for AI computation, but this process is now handled domestically.
Google Cloud established the Seoul region in 2020. Equipped with large-scale infrastructure, the Seoul region includes servers, silicon chips, storage devices, and network equipment. It is also connected to Google’s private network, which links more than 200 countries.
TPUs, which are essential for running AI services, are also provided through Google Cloud’s services. The TPU is a computation device developed by Google, specialized for large-scale data analysis and deep learning. Earlier, at Google Cloud Next held in April, Google Cloud unveiled Ironwood, its seventh-generation Tensor Processing Unit (TPU). Ironwood, designed for large-scale AI inference, is expected to be officially released soon.
Regarding Ironwood, Ji stated, "We are entering an era of inference, in which AI models shift from simply providing information for humans to interpret, to proactively generating interpretations and insights in real time." He added, "We have moved beyond simple data; AI agents now collaborate to proactively search for and generate insight-driven answers."
Google’s LLM Gemini 2.5 and its protein structure prediction model AlphaFold also rely on TPU computation. Ji said, "With Ironwood, we look forward to the innovations that developers in Korea, as well as organizations in the private and public sectors, will achieve."
Following Ji’s presentation, Youngjun Yoo, Chief Operating Officer (COO) of RYUTEN Technologies, explained, "The LLM to be used for new services had to meet several criteria, including strong performance, reasonable pricing, and high stability." He added, "The Gemini 2.5 model family met these standards in RYUTEN’s own tests, and is currently being used alongside other LLMs in various parts of our services."
At the event, companies such as NolUniverse, LG Uplus, NC AI, and Kakao Mobility set up booths in the Gemini Playground to showcase AI-based services developed using Google’s models.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
