
Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds

A new study found that chatbot use appeared to worsen symptoms of mental illness in people struggling with an array of conditions, adding to a growing consensus among medical experts that interacting with unregulated chatbots might steer some users into crisis.

The research, conducted by a team of psychiatrists at Denmark’s Aarhus University and published earlier this month in the journal Acta Psychiatrica Scandinavica, analyzed digital health records from roughly 54,000 Danish patients with diagnosed mental illnesses. After identifying 181 instances of patient notes containing mentions of AI chatbots, they determined that use of the bots — particularly intensive, prolonged use — appeared to deepen symptoms of mental illness in dozens of patients. They found that this pattern seemed to be especially true for patients prone to delusions or mania, and that the risks of chatbot use may be “severe or even fatal” for some.

This latest study was led by Dr. Søren Dinesen Østergaard, a Danish psychiatrist who, back in August 2023, predicted that humanlike chatbots such as ChatGPT could reinforce delusions and hallucinations in people “prone to psychosis.” In a press release, Østergaard cautioned that while more research into causality is needed, he “would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness.”

“I would urge caution here,” said Østergaard.

Though limited to Denmark, the study’s findings add to a wave of public reporting and research about AI-linked mental health crises — sometimes referred to by mental health professionals as “AI psychosis” — in which bots like ChatGPT introduce, reinforce, or otherwise stoke delusional beliefs in users in ways that contribute to destructive mental spirals and real-world harm. Indeed, instead of nudging users away from delusional beliefs or potentially harmful fixations, previous studies show that chatbots tend to reinforce them — which is exactly what mental health professionals urge people not to do when communicating with someone who may be in crisis.

“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one,” said Østergaard, adding that intensive chatbot use “appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia.”

The Danish study found that in addition to deepening delusional beliefs, chatbots also appeared to worsen suicidal ideation and self-harm, disordered eating habits, depression, and obsessive or compulsive symptoms, among other symptoms of mental health issues.

The researchers did note that, out of the nearly 54,000 records they analyzed, they identified 32 cases in which patients’ use of chatbots for therapy or companionship appeared to be “constructive,” for example by alleviating symptoms of loneliness or providing what patients found to be a helpful version of talk therapy. But while using chatbots as a substitute for human therapists has proven to be an extremely common use case, the study’s authors emphasized that AI therapy remains completely unregulated terrain.

As Futurism and others have reported, delusional spirals tied to extensive chatbot use — and the tangible consequences of these episodes, which range from divorce to job loss and financial distress, self-harm, stalking and harassment, hospitalization and jailing, and even death — have impacted people with known histories of serious mental illness as well as those with no such background. The New York Times recently interviewed dozens of mental health professionals who reported that AI delusions are increasingly showing up in their practice.

OpenAI, meanwhile, is facing over a dozen lawsuits related to user safety and the possible psychological impacts of extensive ChatGPT use. One plaintiff, a 34-year-old California man named John Jacquez, had been diagnosed with schizoaffective disorder — a condition that he had worked to manage for years until ChatGPT sent him spiraling into a devastating psychosis, he claims in his lawsuit. In an interview, Jacquez told Futurism that had he been warned that ChatGPT could reinforce delusional thinking, he “never would’ve touched the program.”

“I didn’t see any warnings that it could be negative to mental health,” said Jacquez.

“I fear the problem is more common than most people think,” said Østergaard. “In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records.”

“There are likely far more,” he added, “that have gone undetected.”

More on AI delusions: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds appeared first on Futurism.
