It starts with something small. A sore throat that lingers. A sharp pain you cannot quite explain. Within seconds, you reach for your phone, searching for answers that feel immediate and reassuring.
But the information greeting you at the top of the page may not be as dependable as it appears.
An investigation by The Guardian has raised concerns about Google’s AI-generated summaries, known as AI Overviews.
These automated responses, powered by Google's Gemini technology, appear above traditional search results and aim to provide quick answers to health-related questions. However, researchers warn that speed does not guarantee accuracy.
According to the report, disclaimers noting that AI-generated medical information is for informational purposes only are not always prominently displayed.
In some cases, users must click through for additional details before seeing notices advising them to consult a healthcare professional.
Artificial intelligence specialists, including researchers affiliated with MIT, caution that even advanced models can produce inaccurate or incomplete medical guidance.
Algorithms rely heavily on the details users provide, and symptoms described online may lack important clinical context.
A professor of AI at Queen Mary University of London has pointed to structural challenges in these systems, arguing that tools optimized for rapid responses are more vulnerable to factual errors in sensitive areas like healthcare.
Earlier reporting also found examples of misleading health information in AI summaries. Google has since restricted the feature for certain medical searches, though it remains active in other cases.
Sources: Digi24 and The Guardian
