OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT

As it fights a growing stack of user safety and wrongful death lawsuits, OpenAI says it will introduce a “trusted contact feature” in ChatGPT that will alert a chatbot user’s designated loved one in the event of a possible mental health crisis.

OpenAI announced the new feature last week in a blog post billed as an “update on our mental health-related work.” The company said it’s “working closely” on the rollout with its Council on Well-Being and AI and its Global Physicians Network, two internally-regulated groups of experts launched after reports of AI-tied mental health crises began to emerge, and after a high-profile lawsuit last August revealed the death by suicide of a 16-year-old ChatGPT user named Adam Raine. OpenAI is marketing the feature as an adult-focused endeavor, distinct from its efforts to integrate parental controls and other systems designed to identify and protect minors.

The announcement comes after extensive public reporting — in addition to at least thirteen separate consumer safety lawsuits — about OpenAI customers being pulled into delusional or suicidal spirals with ChatGPT following extensive, often deeply intimate use of the chatbot.

The company doesn’t offer much detail about the feature in the post, saying only that it will “allow adult users to designate someone to receive notifications when they may need additional support.” It has yet to define any reporting standards around what would actually compel the system to flag a person’s use, which will be a tricky policy question. Would someone need to explicitly declare intent to hurt or kill themselves, or possibly someone else, for their loved one to be notified? Or would the feature be designed to track and flag less explicit signs that a user is in a heightened state of crisis — for example, signs that they could be manic, expressing delusional beliefs, or experiencing psychosis?

We’ll likely learn more as OpenAI gears up to roll out the feature, and it could prove especially helpful for users with a diagnosed mental illness who know that intensive AI use could intersect in destructive ways with their mental health. Futurism has reported on several cases of ChatGPT users who successfully managed a mental illness for years before falling into a ChatGPT-tied crisis. In multiple cases we’ve reviewed, ChatGPT has not only reinforced scientific or spiritual delusions, but also encouraged users with a mental illness to stop taking their prescribed medication, agreed that users had been misdiagnosed by human professionals, or driven wedges between users and their real-world support systems. One ChatGPT user now suing OpenAI, a 34-year-old schizoaffective man named John Jacquez, told us that had he known ChatGPT could reinforce delusions, he “never would’ve touched” the product.

That said, OpenAI still doesn’t warn new ChatGPT users that extensive use could negatively impact their mental health. That link is, sure, still being studied and litigated, but there’s a growing consensus among experts, both anecdotally and in studies, that chatbots can likely exacerbate existing mental health conditions or worsen nascent crises. Millions of people manage mental illness every day; with the “trusted contact feature,” the onus would be on the user to first be aware that chatbots could pose some level of risk to their mental health, and then to want a loved one notified of any concerning use patterns.

That “want” is important. A huge number of people lean on AI for emotional support and advice. This is due in part to AI’s low cost and accessibility when compared to oft-inaccessible human therapy — but also, in many cases, because it may feel easier or safer for someone to share sensitive or revealing thoughts with a non-human bot.

In other words, some users could be discussing mental health troubles, or perhaps sharing delusional or dangerous ideas, with ChatGPT expressly because they don’t want to share those thoughts or ideas with another person — a reality that both AI companies and regulators looking at these issues will need to contend with. And to that end, if OpenAI’s internal monitoring tools signal that someone may be in crisis, but that user hasn’t opted to list a trusted contact, what does the company do with that kind of information?

Delusional and suicidal AI spirals haven’t only impacted users with a diagnosed history of serious mental illness, according to reporting by Futurism and the New York Times, a reality that could also shape how many people opt to use this kind of feature. In its blog post, OpenAI said that it’s “continuing to advance how our models detect and respond to signs of emotional distress,” which, in addition to the notification tool, includes “new evaluation methods that simulate extended mental health-related conversations” that the company says will help it “better identify potential risks and improve how ChatGPT responds in sensitive moments.”

OpenAI says it hosts 900 million ChatGPT users every week. By its own estimates, as of October, there are millions of weekly ChatGPT users showing signs of suicidality, psychosis, and other crises. While the efficacy of this kind of notification feature remains to be seen, it does feel like a positive step — though the company’s efforts to mitigate the risks its products may pose to its users continue to feel reactive, not proactive.

More on AI and mental health: Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds

The post OpenAI Says It Will Let Users Add Trusted Contacts to Alert If They Experience a Mental Health Crisis While Using ChatGPT appeared first on Futurism.