
Evidence Grows That AI Chatbots Are Dunning-Kruger Machines

If using an AI chatbot makes you feel smart, we have some bad news.

New research flagged by PsyPost suggests that the sycophantic machines are warping the self-perception and inflating the egos of their users, leading them to double down on their beliefs and think they're better than their peers. In other words, the research provides compelling evidence that AI leads users directly into the Dunning-Kruger effect, a notorious psychological trap in which the least competent people are the most confident in their abilities.

The work, described in a yet-to-be-peer-reviewed study, comes amid significant concern over how AI models can encourage delusional thinking, which in extreme cases has led to life-upending mental health spirals and even suicide and murder. Experts believe that the sycophancy of AI chatbots is one of the main drivers of this phenomenon, which some are calling AI psychosis.

The study involved over 3,000 participants across three separate experiments, but with the same general gist. In each, the participants were divided into four separate groups to discuss political issues like abortion and gun control with a chatbot. One group talked to a chatbot that received no special prompting, while the second group was given a “sycophantic” chatbot which was instructed to validate their beliefs. The third group spoke to a “disagreeable” chatbot instructed to, instead, challenge their viewpoints. And the fourth, a control group, interacted with an AI that talked about cats and dogs. 

Across the experiments, the participants talked to a wide range of large language models, including OpenAI's GPT-5 and GPT-4o models, Anthropic's Claude, and Google's Gemini, representing the industry's flagship offerings. GPT-4o is the one older model in the mix, but it remains relevant because many ChatGPT fans still consider it their favorite version of the chatbot, ironically because it's more personable and sycophantic.

After conducting the experiments, the researchers found that having a conversation with the sycophantic AI chatbots led to the participants having more extreme beliefs, and raised their certainty that they were correct. But strikingly, talking to the disagreeable chatbots didn’t have the opposite effect, as it neither lowered extremity nor certainty compared to the control group.

In fact, making the chatbot disagreeable seemed to have a noticeable effect on only one thing: user enjoyment. The participants preferred the sycophantic companion, and those who spoke to the disagreeable chatbots were less inclined to use them again.

The researchers also found that, when a chatbot was instructed to provide facts about the topic being debated, the participants viewed the sycophantic fact-provider as less biased than the disagreeable one.

“These results suggest that people’s preference for sycophancy may risk creating AI ‘echo chambers’ that increase polarization and reduce exposure to opposing viewpoints,” the researchers wrote.

Equally notable was how the chatbots affected the participants’ self-perception. People already tend to think they are better than average when it comes to desirable traits like empathy and intelligence, the researchers say. But they warned that AI could amplify this “better than average effect” even further.

In the experiments, the sycophantic AI led people to rate themselves higher on desirable traits including being intelligent, moral, empathetic, informed, kind, and insightful. Intriguingly, while the disagreeable AI couldn't move the needle on political beliefs, it did lead participants to give themselves lower self-ratings on those same attributes.

The work isn't the only study to document an apparent relationship between AI use and the Dunning-Kruger effect. Another study found that people who were asked to use ChatGPT to complete a series of tasks tended to vastly overestimate their own performance, with the phenomenon especially pronounced among those who professed to be AI savvy. Whatever AI is doing to our brains, it's probably not good.

More on AI: OnlyFans Rival Seemingly Succumbs to AI Psychosis, Which We Dare You to Try Explain to Your Parents

