Concerns Over Personal Data Leakage and Misuse Grow
Lobby Team Formed and FDA Veteran Recruited
"Explaining How AI Works"
As Google expands its medical artificial intelligence (AI) business, often described as a "doctor in your pocket," it is focusing on persuading the U.S. Congress and the Biden administration. With concerns growing over the leakage or misuse of personal information in medical AI, attention is on whether Google can win over the legislative and administrative authorities now drafting AI regulations.
According to Politico on the 4th (local time), as regulatory authorities began drawing up rules for managing AI, Google quickly assembled a lobbying team and has been meeting with the U.S. Congress and the Biden administration to explain how the technology works. To that end, Google's parent company, Alphabet, has also hired several former FDA officials, including Bakul Patel, the agency's former Digital Health Global Strategy Lead.
Google has been developing medical AI services since 2019 through a partnership with the Mayo Clinic in the U.S., and in June it launched a customized chatbot service called "Enterprise Search." The chatbot can quickly analyze patients' medical histories, medical imaging records, genetic characteristics, test results, and more. HCA Healthcare, which operates more than 2,000 sites of care in the U.S. and the U.K., reportedly uses Google's medical AI to help doctors and nurses document clinical cases.
The medical AI market is expanding rapidly. According to the U.S. Food and Drug Administration (FDA), adoption of AI-based medical devices is expected to grow by more than 30% over last year. While the medical community is eager to apply AI to areas such as medical records, research papers, and image review, political circles have voiced significant concerns about the potential leakage or misuse of sensitive personal information.
There are particular worries about monopoly, since Google already holds vast amounts of personal information through its various services. Senator Mark Warner, a Democrat and member of the Senate Intelligence Committee, recently sent a letter to Google CEO Sundar Pichai expressing concern that hospitals are using Google's AI without sufficient verification. Warner told Politico, "While these tools have the potential to save more lives, they also have the potential to do exactly the opposite."
In response, Mark Isakowitz, Google's Head of North America Government Affairs, said that Google's technology makes only limited use of individuals' health information, does not train on it wholesale, and is subject to monitoring.
Against this backdrop, President Biden has instructed the relevant authorities to develop measures to ensure that AI-powered services in the medical field can be used safely by doctors and patients. However, while the FDA approves AI-based medical devices, no regulations yet apply to AI chatbots and similar technologies, leaving the regulatory framework largely undeveloped.
Google's efforts to persuade the U.S. Congress and the Biden administration appear to stem from past experience. In 2019, Google analyzed tens of millions of medical records with the U.S. hospital chain Ascension as part of a big-data project, only to face a Department of Health and Human Services investigation over personal information protection. Politico reported that Google appears to be trying to win over Congress and the Biden administration preemptively to avoid clashes with regulatory authorities.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.



