Google is facing serious questions over the reliability of its AI-generated health summaries after an investigation by The Guardian uncovered several cases of misleading and even potentially dangerous medical advice.
The AI feature in question, called “AI Overview,” sits at the top of Google’s search results and offers quick, AI-generated summaries of complex health topics. The idea is to give users an easy-to-understand, authoritative response. But in several worrying examples, the summaries provided were simply incorrect.
Some of the errors reported included bad dietary advice for pancreatic cancer patients, confusing interpretations of liver blood test results, and false claims about women’s cancer screenings. Health experts warn that these types of mistakes could lead people to ignore important symptoms, delay seeking treatment, or adopt unsafe habits.
Medical charities and professionals also called out the lack of proper context in these AI summaries, noting that different users received different answers to the same question, which made the summaries even less reliable. Sophie Randall, director of the UK Patient Information Forum, said these findings “demonstrate that Google’s AI Overview can pose health risks by placing inaccurate health information at the top of online searches.”
Pancreatic Cancer UK’s Anna Jewell shared similar concerns, explaining that following the AI’s dietary recommendations could stop patients from eating enough, which is crucial for those undergoing gruelling treatments or surgeries.
In response, Google insisted that most of its AI Overviews are “accurate and helpful.” The company says it is working to improve these health summaries, and it points out that the summaries link to reliable sources and that users should always consult medical professionals for definitive advice.
This investigation highlights the growing pains of using AI for health information online and stresses the need for human oversight—to make sure tech doesn’t end up doing more harm than good.