Large language models tend to express left-of-center political viewpoints

An analysis of 24 conversational large language models (LLMs) has revealed that many of these AI tools tend to generate responses to politically charged questions that reflect left-of-center political viewpoints. However, this tendency was not observed in all models, and foundational models without specialized fine-tuning often did not show a coherent pattern of political preferences the way humans do. The paper was published in PLOS ONE.

Large language models are advanced artificial intelligence systems designed to interpret and generate human-like text. They are built using deep learning techniques, particularly neural networks, and are trained on vast amounts of textual data from sources such as websites, books, and social media. These models learn the patterns, structures, and relationships within language, which enables them to perform tasks like translation, summarization, answering questions, and even creative writing.
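
To give a rough sense of what "generating human-like text" means in practice, the short sketch below prompts a small open model through the Hugging Face transformers library. It is purely illustrative: the model (GPT-2) and the prompt are arbitrary choices for demonstration and were not part of the study.

```python
# Minimal text-generation sketch using the Hugging Face transformers library.
# The model ("gpt2") and the prompt are illustrative choices, not from the study.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# Each output is a dict whose "generated_text" field holds the continuation.
print(outputs[0]["generated_text"])
```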

Since the release of OpenAI’s GPT-2 in 2019, many new LLMs have been developed, quickly gaining popularity as they were adopted by millions of users worldwide. These AI systems are now used for a variety of tasks, from answering technical questions to providing opinions on social and political matters. Given this widespread usage, many researchers have expressed concerns about the potential of LLMs to shape users’ perceptions, especially in areas such as political views, which could have broad societal implications.

This inspired David Rozado to investigate the political preferences embedded in the responses generated by LLMs. He aimed to understand whether these models, which are trained on vast datasets and then fine-tuned to interact with humans, reflect any particular political bias. To this end, Rozado administered 11 different political orientation tests to 24 conversational LLMs. The models he studied included LLMs that underwent supervised fine-tuning after their pre-training, as well as some that received additional reinforcement learning from human or AI feedback.

The political orientation tests used in the study were designed to gauge various political beliefs and attitudes. These included well-known instruments like the Political Compass Test, the Political Spectrum Quiz, the World’s Smallest Political Quiz, and the Political Typology Quiz, among others. These tests aim to map an individual (or, in this case, a model) onto a political spectrum, often based on economic and social dimensions.

The study included a mix of closed-source and open-source models, such as OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, xAI’s Grok, open-source models from the Llama 2 and Mistral series, and Alibaba’s Qwen.

Each test was administered 10 times per model to assess the consistency of responses and to smooth out run-to-run variability. The final sample included a diverse range of models, reflecting various approaches to LLM development. In total, 2,640 individual test instances (24 models × 11 tests × 10 administrations) were analyzed.
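
A loop like the sketch below illustrates what this administration protocol amounts to in code. It is a reconstruction for illustration only, not the author’s actual pipeline: MODELS, TESTS, and ask_model are hypothetical placeholders for the study’s model endpoints and questionnaire items.

```python
# Illustrative sketch of the test-administration protocol described above.
# MODELS, TESTS, and ask_model() are hypothetical placeholders, not the study's code.
from collections import defaultdict

MODELS = ["model_a", "model_b"]          # the study used 24 conversational LLMs
TESTS = {"political_compass": ["Question 1 ...", "Question 2 ..."]}  # the study used 11 tests
RUNS_PER_TEST = 10                       # each test was repeated 10 times per model

def ask_model(model_name: str, question: str) -> str:
    """Placeholder for a call to the model's chat API; returns a canned answer here."""
    return "Agree"

results = defaultdict(list)
for model in MODELS:
    for test_name, questions in TESTS.items():
        for run in range(RUNS_PER_TEST):
            answers = [ask_model(model, q) for q in questions]
            # Each (model, test, run) triple is one test instance;
            # 24 models x 11 tests x 10 runs = 2,640 instances in the study.
            results[(model, test_name)].append(answers)
```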

The results showed a notable trend: most conversational LLMs tended to provide responses that skewed left-of-center. Left-of-center views generally emphasize social equality, government intervention in economic matters to address inequality, and progressive policies on issues such as healthcare, education, and labor rights, while still supporting a market-based economy. This left-leaning tendency was consistent across multiple political tests, although there was some variation in how strongly each model exhibited this bias.

Interestingly, this left-leaning bias was not evident in the base models upon which the conversational models were built. These base models, which had only undergone the initial phase of pre-training on a large corpus of internet text, often produced politically neutral or incoherent responses and struggled to interpret the political questions accurately, suggesting that the ability to produce coherent political responses is a product of fine-tuning rather than of pre-training alone.

Rozado also demonstrated that it is relatively straightforward to steer the political orientation of an LLM through supervised fine-tuning. By using modest amounts of politically aligned data during the fine-tuning process, he was able to shift a model’s political responses toward specific points on the political spectrum. For instance, with targeted fine-tuning, Rozado created politically aligned models like “LeftWingGPT” and “RightWingGPT,” which consistently produced left-leaning and right-leaning responses, respectively. This highlights the significant role that fine-tuning can play in shaping the political viewpoints expressed by LLMs.
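
The sketch below shows, in broad strokes, what such a supervised fine-tuning step can look like. It is a minimal illustration assuming a small causal language model and a toy set of politically slanted question-answer pairs; the model name, example data, and hyperparameters are hypothetical and are not taken from the paper.

```python
# Minimal supervised fine-tuning sketch on politically aligned text.
# The model name, toy examples, and hyperparameters are illustrative assumptions,
# not the data or settings used to build the models described in the paper.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a base LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny toy dataset of question-answer pairs with a consistent political slant.
examples = [
    "Q: Should taxes on high incomes rise? A: Yes, to fund public services.",
    "Q: Should healthcare be publicly funded? A: Yes, as a universal right.",
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()

for epoch in range(3):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # For causal language modeling, the labels are the input tokens themselves.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The paper describes fine-tuning on larger amounts of politically aligned content, but the basic mechanism is the same: repeated exposure to consistently slanted text shifts the distribution of the model’s answers.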

“The emergence of large language models (LLMs) as primary information providers marks a significant transformation in how individuals access and engage with information,” Rozado concluded. “Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information.”

“However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources. This shift in information sourcing has profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

The study sheds light on the political preferences embedded in current versions of popular LLMs. However, it should be noted that the views expressed by LLMs are a manifestation of the training they underwent and the data they were trained on. LLMs trained in a different way and on different data could express very different political preferences.

The paper, “The political preferences of LLMs,” was authored by David Rozado.