Google bans input of company secrets into chatbots
MS, Samsung, Apple, and Amazon also implement internal AI guardrails
EU passes world's first AI regulation bill
Google, which is competing with OpenAI and Microsoft (MS) for dominance in generative artificial intelligence (AI), has issued a 'cautionary order on AI use' to its employees. Even as the 'ChatGPT' craze drives more people to use AI at work, AI developers themselves are worried about side effects and are restricting its use on the job. Meanwhile, the European Union (EU) is accelerating AI regulation, having passed the world's first AI regulatory bill.
According to major foreign media on the 15th (local time), Alphabet, Google's parent company, has instructed employees not to enter confidential business information into generative AI services. The directive applies not only to competitor services such as ChatGPT but also to Google's own AI, 'Bard.' In particular, Alphabet warned engineers not to use computer code generated by chatbots as-is.
Alphabet appears concerned that if employees enter business secrets, the AI could absorb that data during training and later reproduce and leak it. Generative AI learns from vast amounts of data and produces new content by combining and reasoning over the information it has ingested. Foreign media reported that the directive is meant to head off potential business losses as AI competition accelerates, with Google having launched Bard to compete with OpenAI's ChatGPT. Google explained, "Our goal is to be transparent about the limitations of the technology."
MS is likewise cautious about employees' AI use and, like Google, reportedly restricts it at work. Yusuf Mehdi, MS Chief Marketing Officer (CMO), said, "It is common sense that companies do not want employees to use chatbots for work," adding, "Companies are taking a very conservative stance. Our policy is much stricter." Other global companies, including Samsung, Apple, Amazon, and Deutsche Bank, are also putting internal guardrails around AI chatbots one after another.
Despite these corporate moves, employees' use of AI keeps growing. In a survey of 12,000 office workers by Fishbowl, a workplace social networking app, 43% of respondents said they use AI tools such as ChatGPT at work without telling their supervisors. Using ChatGPT for emails and document drafting can significantly speed up their work.
Beyond the risk of leaking trade secrets, AI side effects are surfacing everywhere. On the 22nd of last month, for example, an AI-generated fake photo of an explosion at the Pentagon, the U.S. Department of Defense headquarters, spread rapidly online, sending the S&P 500 index on the New York Stock Exchange down 0.3% within 30 minutes. Quantitative investment strategies, in which computers trade on algorithms rather than fund managers' judgment, can be severely affected by fake information spread by AI. Doug Greenig, founder of hedge fund Florin Court Capital, voiced concern, saying, "AI is definitely becoming increasingly difficult to manage. It will open the door to all kinds of pranks and damage regarding information."
Accordingly, governments worldwide are rushing to draw up AI regulations. On the 14th, the European Parliament, the EU's legislative body, passed the world's first AI regulatory bill. The bill requires that content produced by generative AI such as ChatGPT disclose its source and bans remote biometric identification, such as facial recognition, in public places. The U.S. Congress is also discussing regulatory measures and holding AI hearings.
Earlier, American AI experts, including Sam Altman, CEO of ChatGPT developer OpenAI, also called for regulation, stating, "We must prevent human extinction caused by AI."
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.