
Research Shows Popular AI Language Models Inclined to Political Bias

OpenAI's GPT-4 leans toward left-wing libertarianism, while Meta's LLaMA tends toward right-wing authoritarianism.

According to a recently released research paper, researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University have found that different AI language models exhibit distinct political biases.

The study, which examined 14 large language models, found that OpenAI’s chatbot ChatGPT and its latest model, GPT-4, lean toward left-wing libertarianism, whereas Meta’s LLaMA tends toward right-wing authoritarianism. The researchers put statements about democracy, feminism, and other politically charged topics to the models and used the responses to assess each model’s political slant.

Unexpectedly, the study also found that retraining the models on datasets with different political leanings changed their behavior, including their ability to recognize hate speech and misinformation.


The study used a three-stage approach to examine how political bias develops in AI language models. In the first stage, the models’ responses to politically charged statements revealed their inherent political leanings. Google’s BERT models, for instance, came across as more socially conservative than OpenAI’s GPT models. One likely explanation lies in the training data: the older BERT models were trained largely on books, which tend to be more conservative, while the newer GPT models learned from internet text, which skews more liberal.
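To make the probing step concrete, here is a minimal sketch of how one might elicit a model’s stance on such statements using the Hugging Face transformers library. It is illustrative only, not the paper’s code: the model choice (gpt2), prompt template, example statements, and the idea of mapping the free-text reply to agree/disagree are all assumptions.

```python
# Illustrative sketch only -- not the authors' code. The model, prompt
# template, and statements below are placeholder assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

statements = [
    "The freer the market, the freer the people.",
    "The government should do more to reduce income inequality.",
]

for statement in statements:
    prompt = f'Please respond to the following statement.\n"{statement}"\nResponse: I'
    output = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
    response = output[len(prompt):].strip()
    # In practice, the free-text response would be mapped to agree/disagree
    # (for example with a stance classifier) and aggregated into an overall
    # position for the model.
    print(f"Statement: {statement}\nModel response: I {response}\n")
```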

In the second stage, GPT-2 and Meta’s RoBERTa were further trained on datasets of news articles and social media posts drawn from both left-leaning and right-leaning sources. This additional training reinforced the biases the models already had.
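This retraining step amounts to continued pretraining on a partisan corpus. The sketch below shows roughly what that looks like for GPT-2 with the transformers Trainer; it is an illustration under assumptions, not the paper’s setup, and the corpus file, hyperparameters, and output directory are placeholders.

```python
# Illustrative sketch only -- not the paper's training setup. The corpus file,
# hyperparameters, and output directory are placeholder assumptions.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A plain-text file of, say, left-leaning news articles (placeholder path).
dataset = load_dataset("text", data_files={"train": "left_leaning_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-left", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # continued pretraining on the partisan corpus
```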

In the study’s final stage, the researchers examined how the models’ political leanings affected their ability to classify hate speech and misinformation. Models retrained on left-leaning data were more sensitive to hate speech targeting minority groups, while models retrained on right-leaning data were more sensitive to hate speech directed at white Christian men.
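One simple way to surface that asymmetry is to break detection rates down by the group a post targets. The sketch below illustrates the idea; the classifier checkpoint, label strings, and example records are placeholders, not the paper’s evaluation code.

```python
# Illustrative sketch only -- not the authors' evaluation code. The checkpoint
# name, label strings, and example records are placeholder assumptions.
from collections import defaultdict
from transformers import pipeline

# Hypothetical hate-speech classifier fine-tuned from a partisan-pretrained model.
detector = pipeline("text-classification", model="your-org/roberta-left-hate-speech")

# Labeled test examples, tagged with the group each text targets (placeholders).
test_set = [
    {"text": "example post targeting a minority group", "target_group": "minority"},
    {"text": "example post targeting white Christian men", "target_group": "white_christian_men"},
]

# Tally how often hate speech is flagged, broken down by targeted group,
# to expose asymmetric sensitivity of the kind the study reports.
flagged = defaultdict(list)
for example in test_set:
    prediction = detector(example["text"])[0]["label"]
    flagged[example["target_group"]].append(prediction == "hate")

for group, flags in flagged.items():
    print(f"{group}: flagged {sum(flags)}/{len(flags)}")
```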

The research team emphasized the need to understand the political biases exhibited by AI language models, particularly as these models are increasingly incorporated into widely used products and services. Right-wing critics have accused OpenAI, the company behind ChatGPT, of building a chatbot that reflects a liberal viewpoint.

OpenAI has sought to reassure the public that it is actively addressing these concerns and instructs its human reviewers not to favor any political group while refining the model. The researchers remain skeptical, however, arguing that no AI language model is likely to be entirely free of political bias.


Sahil Pawar
