
OpenAI Launches ChatGPT Health, Which Ingests Your Entire Medical Records, But Warns Not to Use It for “Diagnosis or Treatment”

AI chatbots may be explosively popular, but they're known to dispense some seriously wacky, and potentially dangerous, health advice, adding to a flood of easily accessible misinformation that has alarmed experts.

Their advent has turned countless users into armchair experts, who often end up relying on obsolete, misattributed, or completely made-up advice.

A recent investigation by The Guardian, for instance, found that Google’s AI Overviews, which accompany most search results pages, doled out plenty of inaccurate health information that could lead to grave health risks if followed.

But seemingly unperturbed by experts’ repeated warnings that AI’s health advice shouldn’t be trusted, OpenAI is doubling down by launching a new feature called ChatGPT Health, which will ingest your medical records to generate responses “more relevant and useful to you.”

Yet despite being “designed in close collaboration with physicians” and built on “strong privacy, security, and data controls,” the feature is “designed to support, not replace, medical care.” In fact, it’s shipping with a ludicrously self-defeating caveat: that the bespoke health feature is “not intended for diagnosis or treatment.”

“ChatGPT Health helps people take a more active role in understanding and managing their health and wellness — while supporting, not replacing, care from clinicians,” the company’s website reads.

In reality, users are certain to use it for exactly the type of health advice that OpenAI is warning against in the fine print, which is likely to bring fresh embarrassments for the company.

That will only heighten existing problems. As Business Insider reports, ChatGPT is "making amateur lawyers and doctors out of everyone," to the dismay of legal and medical professionals.

Miami-based medical malpractice attorney Jonathan Freidin told the publication that people will use chatbots like ChatGPT to fill out his firm’s client contact sheet.

“We’re seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways,” he said. “While that may be true, it doesn’t necessarily translate into a viable case.”

Then there's the fact that users are willing to surrender medical histories, including highly sensitive and personal information — a decision that OpenAI is now encouraging with ChatGPT Health — even though federal health privacy law like HIPAA doesn't apply to consumer AI products.

Case in point, billionaire Elon Musk encouraged people last year to upload their medical data to his ChatGPT competitor Grok, leading to a flood of confusion as users received hallucinated diagnoses after sharing their X-rays and PET scans.

Given the AI industry's spotty record on privacy protection and its history of significant data leaks, these risks are as pertinent as ever.

“New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share and it must be protected,” Center for Democracy and Technology senior counsel Andrew Crawford told the BBC.

“Especially as OpenAI moves to explore advertising as a business model, it’s crucial that separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” he added. “Since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time,” Electronic Privacy Information Center senior counsel Sara Geoghegan told The Record.

Then there are concerns over highly sensitive data, like reproductive health information, being passed on to the police against the user’s wishes.

“How does OpenAI handle [law enforcement] requests?” Crawford told The Record. “Do they just turn over the information? Is the user in any way informed?”

“There’s lots of questions there that I still don’t have great answers to,” he added.

More on AI and health advice: Google’s AI Overviews Caught Giving Dangerous “Health” Advice

The post OpenAI Launches ChatGPT Health, Which Ingests Your Entire Medical Records, But Warns Not to Use It for “Diagnosis or Treatment” appeared first on Futurism.
