A recent study published in Social Cognitive and Affective Neuroscience found that people on both sides of the immigration debate tend to endorse factual statements that align with their own beliefs and to favor messages from members of their own group. Importantly, the study showed that this behavior is reflected in brain activity. By analyzing participants’ brain responses, researchers identified neural patterns that predicted when people would overendorse messages that aligned with their views and downplay those that didn’t.
In today’s highly polarized political environment, people are often quick to embrace information that supports their existing beliefs while dismissing opposing views. This phenomenon, known as motivated reasoning, has far-reaching consequences, from shaping public opinion to fueling misinformation. Understanding the underlying brain processes that drive this behavior could help researchers and policymakers develop strategies to reduce bias and misinformation.
Previous research has proposed different models to explain why people exhibit biased reasoning in political contexts. Some suggest that individuals derive psychological rewards from holding beliefs that align with their social group or ideology. Others propose that the brain’s error detection system, which alerts us when new information contradicts prior beliefs, could play a key role. However, the exact neural mechanisms remain unclear.
“The spread of misinformation and the growing belief polarization often reflect people’s motivation to process information in ways that protect their valuable political identities and affirm their political attitudes even at the defiance of facts,” said study author Giannis Lois, a postdoctoral researcher in the Department of Psychology at the University of Crete.
“For the development of evidence-based policy interventions to curb this problem, a better scientific understanding of the underlying neurocognitive processes of politically motivated reasoning is of paramount importance. However, existing work has not systematically examined the neural basis of politically motivated reasoning as most neuroimaging studies on this topic have failed to distinguish motivated reasoning from rational Bayesian reasoning with biased prior beliefs. More specifically, these studies did not control for essential components of the belief updating process such as participants’ prior beliefs on the topic of interest and their confidence in the different sources of information.”
The study focused on the contentious issue of immigration, specifically how people perceive foreign criminality. After screening 628 potential participants in Germany, the researchers recruited 41 individuals: 26 who supported a welcoming migration policy and 15 who favored stricter immigration controls. These two groups represented opposite ends of the immigration debate: one group believed that immigrants do not increase crime rates, while the other believed they do.
Participants were asked to estimate the percentage of crimes committed by foreigners in various German cities and then read factual messages that either confirmed or contradicted their estimates. Importantly, the messages came from members of either their own group (pro- or anti-immigration) or the opposing group. For example, a participant who believed that foreigners are not responsible for high crime rates might see a message from an anti-immigration supporter stating that foreign criminality is higher than they estimated. Participants were then asked to rate how likely it was that the message was correct.
While participants completed this task, researchers used functional magnetic resonance imaging (fMRI) to measure their brain activity. The goal was to observe which brain regions were involved when participants decided to endorse or reject the messages. By comparing responses to different types of messages (in-group vs. out-group, desirable vs. undesirable), the researchers were able to identify the neural mechanisms driving these biases.
The researchers found that participants were more likely to believe messages that confirmed their views and messages from in-group members. In contrast, they tended to underendorse or reject messages that contradicted their beliefs or came from out-group members. Interestingly, the study found no significant difference between the two groups in the magnitude of these biases, indicating that both sides were equally prone to motivated reasoning.
“We demonstrate that people engage in two distinct forms of motivated reasoning: desirability bias, which reflects a tendency to accept messages aligned with their ideology while rejecting those that are not, and identity bias, which reflects a preference for messages from ingroup members and a discounting of messages from outgroup members,” Lois told PsyPost.
The brain imaging data provided further insight into these biases. The researchers identified several brain regions that were activated when participants processed messages in a biased way. Key areas included the ventromedial prefrontal cortex and the ventral striatum, regions known to be involved in encoding the subjective value of information and in reward-based decision-making. When participants endorsed messages that aligned with their beliefs, these regions showed increased activity, suggesting that the brain treats belief-confirming information as rewarding.
In addition, regions associated with error detection, such as the dorsal anterior cingulate cortex and the anterior insula, were also activated. These areas help the brain detect and process discrepancies between new information and prior beliefs. The study found that these regions were more active when participants processed messages that contradicted their views, possibly reflecting the brain’s effort to reconcile conflicting information.
Finally, the researchers observed activity in the temporoparietal junction, a region involved in understanding others’ perspectives and social reasoning. This area was particularly active when participants processed messages from in-group members, suggesting that people may engage in more mental effort to understand and accept information from their own social group.
“We show that these biases are supported by different neural mechanisms,” Lois explained. “Brain areas associated with value encoding, error detection, and mentalizing were linked to desirability bias, whereas identity bias involved less extensive activation in the mentalizing network. The similar brain activation patterns observed in two rival political groups suggest that belief polarization occurs because opposing ideological groups rely on shared neurocognitive processes that drive motivated reasoning.”
But as with all research, there are some limitations to consider.
“As with most neuroimaging studies, the relatively small sample size and the focus on a single issue (in this case, immigration) may limit the generalizability of our findings to other populations or socio-political contexts,” Lois noted. “However, it is worth noting that we replicated the behavioral effects observed by Thaler (2024), who used a similar experimental design but addressed a range of political topics. This suggests that people engage in motivated reasoning across various contexts, not just when discussing immigration.”
Looking forward, the researchers plan to further explore the causal role of specific brain regions.
“Our next step in this line of research is to use non-invasive brain stimulation techniques to clarify the causal role of specific cortical regions in politically motivated reasoning,” Lois said. “These studies can offer valuable new insights into the precise neural and cognitive mechanisms that drive motivated thinking in political contexts. Such insights can then enhance existing cognitive models and contribute to the development of strategies to reduce belief polarization and limit the spread of misinformation.”
The study, “Tracking politically motivated reasoning in the brain: the role of mentalizing, value-encoding, and error detection networks,” was authored by Giannis Lois, Elias Tsakas, Kenneth Yuen, and Arno Riedl.