From AI Scholars to Far-Right Figures: "Superintelligence Development Must Stop"

"The Greatest Threat Is Not Companies or Nations, but AI"

On October 22 (local time), CNBC and other media outlets reported that artificial intelligence (AI) and technology experts, as well as far-right figures, business leaders, and other prominent individuals from various fields, have signed a joint statement calling for a halt to efforts to develop 'superintelligence.'


The joint statement, released that day by the nonprofit Future of Life Institute (FLI), was signed by more than 800 people, including Apple co-founder Steve Wozniak, Susan Rice, who served as National Security Advisor under the Barack Obama administration, renowned AI scholar Yoshua Bengio of the University of Montreal, Geoffrey Hinton of the University of Toronto, and Stuart Russell of the University of California, Berkeley.

[Photo: Reuters/Yonhap News]

Other notable signatories include Richard Branson, founder of the Virgin Group; former U.S. Joint Chiefs of Staff Chairman Mike Mullen; Prince Harry, Duke of Sussex; and Meghan Markle, Duchess of Sussex, reflecting broad participation from a wide range of fields.


Steve Bannon, former White House Chief Strategist, and Glenn Beck, a conservative political commentator and radio host, both close associates of U.S. President Donald Trump, also joined the initiative. Reuters explained that this signals growing anxiety about AI within conservative circles, at a time when Silicon Valley figures are exerting influence within the Trump administration.


As the U.S.-China competition in AI intensifies following the global shock caused by Chinese AI startup DeepSeek, Chinese scientists such as Turing Award winner Andrew Yao, professor at Tsinghua University, and Zhang Yaqin, former president of Baidu and now director of the Tsinghua University Institute for Artificial Intelligence Industry Research, have also signed the statement.


Superintelligence refers to AI that surpasses human intelligence across virtually all cognitive tasks. Recently, leading companies from xAI to OpenAI have been fiercely competing to release more advanced large language models (LLMs). Meta has even named its LLM division the 'Meta Superintelligence Research Lab.'


However, in the statement, the signatories warned that the emergence of superintelligence "raises a wide range of concerns, from human economic obsolescence and disempowerment, to the loss of freedom, civil liberties, dignity, and control, and even to threats to national security and the potential extinction of humanity."


They also called for a ban on the development of superintelligence until there is broad public support and scientific consensus that it can be developed in a safe and controllable manner.


Max Tegmark, founder of FLI, told the Financial Times (FT), "What unites us is our humanity," adding, "More and more people are beginning to think that the greatest threat may not come from other companies or other countries, but from the machines we are creating ourselves."


© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
