Research from University of East Anglia
"Close to US Democrats and UK Labour Party"
"AI Reflects Social Polarization... Vicious Cycle"
OpenAI's 'ChatGPT,' the front-runner of the generative artificial intelligence (AI) boom, has drawn attention after research found that it exhibits politically progressive tendencies. Concerns have also been raised that the political bias of generative AI could pose a threat to democracy.
Comparison of Responses Assuming Political Orientations and 'Default Responses'
According to the Washington Post (WP) on the 16th (local time), researchers from the University of East Anglia in the UK recently published a paper containing these findings in the journal Public Choice. The experiment showed that ChatGPT gave answers close to those of the US Democratic Party, the UK Labour Party, and Brazilian President Luiz Inácio Lula da Silva when asked about political beliefs.
The researchers first set 60 ideological questions and then instructed ChatGPT to answer assuming various political orientations such as progressive, conservative, and neutral. They then examined political bias by comparing the responses given under each assumed political orientation with ChatGPT’s 'default responses.'
Generative AIs like ChatGPT can give a different answer each time the same question is asked. Each question was therefore repeated 100 times, and to obtain more accurate confidence intervals, bootstrapping (resampling with replacement from the observed sample to recompute the statistic) was performed 1,000 times.
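The bootstrap procedure described above can be sketched in a few lines. This is a minimal illustration of percentile bootstrapping, not the paper's actual code; the function name and the simulated scores are assumptions for demonstration.

```python
import random
import statistics

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean.

    Resamples with replacement from the observed scores and
    recomputes the mean n_boot times (1,000 in the study),
    then reads the CI off the sorted resampled means.
    """
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        statistics.mean(rng.choices(scores, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical data: 100 repeated "agreement scores" for one question,
# standing in for ChatGPT's 100 repeated answers.
rng = random.Random(1)
scores = [rng.gauss(0.6, 0.1) for _ in range(100)]
lo, hi = bootstrap_ci(scores)
print(f"95% CI for the mean response: [{lo:.3f}, {hi:.3f}]")
```

Repeating each question 100 times averages out the model's randomness; the 1,000 bootstrap resamples then quantify how stable that average is.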
As a result, it was confirmed that ChatGPT leaned more toward the US Democratic Party than the Republican Party, and toward the UK Labour Party rather than the Conservative Party. Additionally, it showed tendencies similar to supporters of Brazil’s 'leftist godfather' President Lula rather than former far-right President Jair Bolsonaro.
There are concerns that such political bias could threaten democracy and accelerate social polarization. Dr. Fabio Motoki of the University of East Anglia explained, “(AI’s) political bias can influence real-world politics and elections,” adding, “This study shows that AI can replicate or amplify problems found in online and social networking services (SNS).”
"ChatGPT Progressive, LLaMA Conservative, BERT Centrist"
Meanwhile, earlier this month, a joint study by Carnegie Mellon University, the University of Washington, and Xi’an Jiaotong University compared the political orientations of large language model (LLM)-based services such as ChatGPT, Google’s 'BERT,' and Meta’s 'LLaMA.' LLMs are a type of algorithm that predicts the next word based on previously generated words, enabling AI to respond in human-like natural language.
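The next-word prediction that LLMs perform can be illustrated with a toy bigram model. This is a deliberately simplified stand-in: real LLMs use neural networks conditioned on long contexts, whereas this sketch looks only at the single preceding word. All names here are illustrative.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which word follows which: a toy stand-in for an LLM's
    next-word prediction. Real models learn probabilities over long
    contexts; this only tallies adjacent word pairs."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most frequently observed after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

Generating text is then a matter of repeatedly predicting the next word and appending it to the context, which is why training data (and its biases) directly shapes the output.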
The researchers posed socially, economically, and politically sensitive questions on topics such as immigration, climate change, and same-sex marriage to 14 different AIs. Among them, 'GPT-4,' the foundation of ChatGPT, showed the most progressive tendencies. LLaMA was conservative, and BERT took a relatively centrist stance.
There are criticisms that AI, which learns from vast online databases, may absorb social discrimination and political biases as they are. Researcher Chan-Young Park from Carnegie Mellon University told WP, “Society’s (ideological) polarization is reflected in AI,” and “this creates a vicious cycle that leads back to polarization in real society.”
However, ChatGPT's progressive tendencies may also stem from the 'feedback' process that filters out hateful expressions, such as racist and sexist remarks, which are more common among conservative groups. Researcher Park analyzed, "Weighting responses without hateful expressions may have induced progressive answers on social issues."
WP noted, “Since standards for what is considered progressive or conservative can vary depending on the country and individual beliefs, research on AI’s political bias has inherent limitations.”
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.