In May 2024, Google threw caution to the wind by rolling out its controversial AI Overviews feature in a purported effort to make information easier to find.
But the AI hallucinations that followed — like telling users to eat rocks and put glue on their pizzas — ended up perfectly illustrating the persistent issues that plague large language model-based tools to this day.
And while not being able to reliably tell what year it is or making up explanations for nonexistent idioms might sound like innocent gaffes that at most lead to user frustration, some of the advice Google’s AI Overviews feature is offering up could have far more serious consequences.
In a new investigation, The Guardian found that the tool’s AI-powered summaries are loaded with inaccurate health information that could put people at risk. Experts warn that it’s only a matter of time until the bad advice endangers users — or, in a worst-case scenario, results in someone’s death.
The issue is severe. For instance, The Guardian found that the tool advised people with pancreatic cancer to avoid high-fat foods, despite doctors recommending the exact opposite. It also completely bungled information about women’s cancer tests, which could lead people to ignore real symptoms of the disease.
It’s a precarious situation, as those who are vulnerable and suffering often turn to the internet to self-diagnose in search of answers.
“People turn to the internet in moments of worry and crisis,” end-of-life charity Marie Curie director of digital Stephanie Parker told The Guardian. “If the information they receive is inaccurate or out of context, it can seriously harm their health.”
Others were alarmed by the feature turning up completely different responses to the same prompts, a well-documented shortcoming of large language model-based tools that can lead to confusion.
Mental health charity Mind’s head of information, Stephen Buckle, told the newspaper that AI Overviews offered “very dangerous advice” about eating disorders and psychosis, summaries that were “incorrect, harmful or could lead people to avoid seeking help.”
A Google spokesperson told The Guardian in a statement that the tech giant invests “significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”
But given the results of the newspaper’s investigation, the company has a lot of work left to ensure that its AI tool isn’t dispensing dangerous health misinformation.
The risks could continue to grow. According to an April 2025 survey by the University of Pennsylvania’s Annenberg Public Policy Center, nearly eight in ten adults said they’re likely to go online for answers about health symptoms and conditions. Nearly two-thirds of them found AI-generated results to be “somewhat or very reliable,” indicating a considerable — and troubling — level of trust.
At the same time, just under half of respondents said they were uncomfortable with healthcare providers using AI to make decisions about their care.
A separate MIT study found that participants deemed low-accuracy AI-generated responses “valid, trustworthy, and complete/satisfactory” and even “indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided.”
That’s despite AI models continuing to prove themselves as strikingly poor replacements for human medical professionals.
Meanwhile, doctors have the daunting task of dispelling myths and trying to keep patients from being led down the wrong path by a hallucinating AI.
On its website, the Canadian Medical Association calls AI-generated health advice “dangerous,” pointing out that hallucinations, as well as algorithmic biases and outdated facts, can “mislead you and potentially harm your health” if you choose to follow the generated advice.
Experts continue to advise people to consult human doctors and other licensed healthcare professionals instead of AI, a tragically tall ask given the many barriers to adequate care around the world.
At least AI Overviews sometimes appears to be aware of its own shortcomings. When queried if it should be trusted for health advice, the feature happily pointed us to The Guardian’s investigation.
“A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm,” read the AI Overviews’ reply.
More on AI Overviews: Google’s AI Summaries Are Destroying the Lives of Recipe Developers