
Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice

While Google’s AI may no longer recommend eating rocks or confidently tell users to put glue on their pizza, even cutting-edge AI chatbots remain staggeringly incompetent at dispensing medical advice.

In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to “play doctor” when confronted with realistic symptoms that an actual patient could feasibly ask about.

The results painted a damning picture. The AIs’ failure rates exceeded 80 percent when they were given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models.

“Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment,” said corresponding author and Massachusetts General Hospital associate chair of innovation and commercialization Marc Succi in a statement. “Differential diagnoses are central to clinical reasoning and underlie the ‘art of medicine’ that AI cannot currently replicate,” he added.

Translated into the real world, an AI that leaps to conclusions when not presented with the full picture could have devastating consequences. If a person asks a chatbot about a rash or a sudden-onset cough, for example, they may be presented with misleading information and potentially dangerous advice.

The results highlight the considerable risks of relying on AI for life-or-death health advice, a worrying trend that’s already playing out across the country. As a recent survey by the West Health-Gallup Center on Healthcare in America found, one in four American adults — the equivalent of 66 million people — are already asking ChatGPT and other chatbots like it for medical advice.

Respondents often said they were seeking information both before and after seeing a healthcare professional. In many cases, though, they forgo real-world medical assistance entirely after talking to a chatbot. Among those who asked AI for health advice, 14 percent — the equivalent of over nine million Americans — said they never saw a provider they would’ve otherwise seen if it weren’t for the tech.

According to the survey, 27 percent of respondents cited not wanting to pay for a doctor’s visit as a reason for consulting AI, while 14 percent said they were unable to pay for one. Some participants said they didn’t have the time or ability to visit a doctor.

“Artificial intelligence is already reshaping how Americans seek health information, make decisions and engage with providers, and health systems must keep pace,” said West Health Policy Center president Tim Lash in a statement.

Taken together, the two studies paint a damning picture of the current healthcare landscape in the US. Not only are millions of Americans heavily relying on AI tools, they’re frequently being presented with flawed advice by hallucinating LLMs — and choosing not to seek help from far more knowledgeable professionals.

AI has already caught a large amount of flak from experts for doling out bad medical advice, from Google’s AI Overviews giving dangerously inaccurate or out-of-context information to transcription tools used by doctors inventing nonexistent medications.

Even when the information it provides is wrong, AI gives patients a sense of certainty. Almost half of respondents in the latest survey said that talking to a chatbot about medical problems had made them feel more confident when talking to a provider, 22 percent said it helped them identify issues earlier, and 19 percent said it allowed them to avoid unnecessary tests or procedures.

At the same time, many Americans remain highly skeptical of AI’s medical advice. Roughly a third of participants who said they consulted AI for health issues said they distrusted the tool. One in ten respondents said the AI gave them potentially unsafe advice.

One thing’s for sure: the AI industry is in dire need of regulatory oversight.

More on AI and medical advice: Frontier AI Models Are Doing Something Absolutely Bizarre When Asked to Diagnose Medical X-Rays

The post Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice appeared first on Futurism.
