Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong

In a matter of only a few years, AI chatbots have become a common part of many of our daily lives, even though they remain deeply flawed systems.

The reality is that chatbots like OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude still make regular mistakes. According to an October study by the BBC, even the most advanced AI chatbots gave wrong answers a whopping 45 percent of the time.

But many users don’t understand that reality. As detailed in a new paper, University of Pennsylvania postdoctoral researcher Steven Shaw and marketing professor Gideon Nave found that in a series of experiments, users tended to take the output of ChatGPT at face value even when it gave them the incorrect answer.

Across a series of experiments, participants were asked to answer a variety of reasoning and knowledge-based questions. Even though using ChatGPT was optional, over 50 percent of participants chose to use the chatbot to answer the questions.

The researchers were testing a key theory: whether users would be willing to believe what the AI was telling them regardless of accuracy, in what they termed a “cognitive surrender” that effectively overrode their intuition and deliberation process.

In the most striking experiment, which involved 359 participants, people followed the AI's correct advice 92.7 percent of the time — and a still-considerable 79.8 percent of the time when the AI gave them the wrong answer.

“While override rates were substantially higher on AI-Faulty than AI-Accurate trials, participants followed faulty AI recommendations on roughly four out of five chat-engaged trials,” the researchers wrote.

The research points to a much broader change in how we perceive the world around us and how much we let AI influence the decisions we make.

“We felt that the ability to actually outsource thinking hadn’t really been studied itself. It’s sort of a profound idea,” Shaw said during a UPenn podcast appearance last month. “A bit provocative, I would say, in the paper, that with these AI tools that are available, they’re so ingrained in our daily lives and decision processes that we now have the option or ability to outsource thinking itself.”

The results suggest that users are willing to give up their own agency when AI presents them with false-but-plausible directions.

“We saw that even when cognitive surrender is engaged, people adopt those answers and are more confident in those answers,” Shaw explained during the podcast episode.

The experiments also suggest we could be losing our ability to critically engage with information, something previous research has found as well.

“The capacity to think critically, the capacity to be able to check what the AI is giving you has become more and more important over time,” Nave said. “This is kind of a muscle that we have, that hopefully we are not going to lose over time.”

“Right now, we are constrained by communicating with LLMs through our phones or our computers,” Shaw added. “As those barriers reduce, that integration is just going to become stronger.”

Over time, we could keep ceding our agency, further cementing our reliance on AI.

“Everybody thinks that this point will come from AI getting better and better,” Nave said. “But there is an alternative story here, of humans becoming more and more reliant on AI. Just like we now have an air conditioner that can set our temperature easily, and we can move from one place to another without using any physical activity.”

“Just like many of us have lost something because of this cultural or technological evolution, we may lose as a species something very critical to our existence,” he added, “which is our capacity to think.”

More on AI and thinking: Harvard Professor Says AI Users Are Losing Cognitive Abilities

The post Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong appeared first on Futurism.
