Meeting with AI Scholar Yann LeCun, NYU Professor
"AI Development Backed by Giant Industries, Different from Before"
Unproductive Regulation Due to AI Risk Concerns Should Be Avoided
Big Tech Should Open Up Scientific and Technical Information and Open-Source Code
Yann LeCun, a New York University professor regarded as one of the "Four Kings of Artificial Intelligence (AI)," recently said that the current controversy over an AI investment bubble differs from past cycles because it is driven by real industry. He stressed that concerns about AI risks should not lead to excessive regulation, and highlighted openness, such as information sharing among companies, as the way to accelerate AI development.
Yann LeCun, a professor at New York University and considered one of the "Four Kings" in the field of artificial intelligence (AI), speaks at a press conference held after the opening ceremony of the Korea-US joint AI research platform "Global AI Frontier Lab" at the MetroTech Center in Brooklyn, New York, on the 24th (local time). New York = Photo by Kwon Haeyoung
On the 24th (local time), at a press conference held after the opening ceremony of the Korea-US joint AI research platform "Global AI Frontier Lab" at the MetroTech Center in Brooklyn, New York, LeCun said, "Today’s AI development is different from before because there are large industries, and expectations of success are driving innovation."
He explained, "AI technology rapidly advanced in the 1980s, slowed down in the 1990s, and then accelerated from the early 2010s due to investments from industry and interest from young students and researchers. Although there is uncertainty, if it leads to groundbreaking achievements within 5 to 10 years, AI investment may not be a waste."
He also stressed that big tech companies need to engage in cooperation and competition with "openness" as the keyword to accelerate AI technology development and progress.
LeCun said, "Some companies like OpenAI and Google limit the sharing of technological and scientific information to maintain their advantage," adding, "Sharing scientific information and open-source platforms is necessary."
He cited Meta in particular, which released its large language model (LLM) Llama as open source last year, as a model case. He explained that open-source sharing allows everyone to access and build on the technology, ultimately speeding up AI development and progress.
Regarding concerns about the risks of AI development, he argued that excessive regulation should not hinder technological advancement.
LeCun pointed out, "A small number of people are raising loud voices about AI risks, exaggerating their actual impact," and added, "Concerns about the existential risks of AI have led to unproductive regulations by some governments, which in turn make AI more dangerous." He continued, "Strongly advancing AI is the right approach," and reiterated, "Basically, scientific information and open-source platforms must be shared."
He emphasized that AI’s positive functions, such as bridging the digital divide, are significant.
He said, "AI can bridge the digital divide by providing access to people who previously could not access technology," emphasizing, "Even in remote areas of Africa or India, people can obtain information using a mobile phone without knowledge of computers or the internet." For example, he envisioned that smart glasses equipped with AI functions could provide real-time translation and interpretation, overcoming language and cultural barriers.
Regarding the timeline for developing artificial general intelligence (AGI) with human-like intelligence, he said, "Human intelligence is highly specialized, so AGI in that sense does not exist," and predicted, "The emergence of AI with human-level intelligence will not be a single event. Progress will be gradual, and because the problem is much harder, it could take 20 years or more."
LeCun expressed skepticism about government-led AI investments.
He said, "Big tech has made massive investments in talent, experts, and computing resources," adding, "Operating LLMs like ChatGPT requires enormous computing resources and personnel, and the costs are astronomically increasing." He concluded, "Considering the need for data centers and other infrastructure requiring huge capital for AI development, no country in the world can match the efforts of big tech."
Although governments in the Middle East, China, and Europe are actively promoting AI technology development, he said their competitiveness lags significantly behind that of tech companies such as OpenAI, Microsoft, Google, and Meta.
He gave a positive assessment of South Korea's AI competitiveness.
In a keynote speech before the press conference, LeCun said, "South Korea is the only country alongside the United States where top research is conducted across the entire spectrum, from theory to algorithms, applications, hardware, and even robotics," adding, "It holds an excellent position especially in fundamental technologies of electronics, manufacturing, and robotics."
LeCun is a global authority in the AI field. Along with Geoffrey Hinton of the University of Toronto, Andrew Ng of Stanford University, and Yoshua Bengio of the University of Montreal, he is considered one of the four great AI scholars. He is regarded as an "AI Boomer" (development advocate) standing opposite the "AI Doomers" (doomsayers) who advocate AI regulation. He currently serves as a professor at New York University and chief AI scientist at Meta, the parent company of Facebook.