Friday, February 27, 2026

One in four medical diagnoses generated by AI is invented, but many users trust them

One in four medical diagnoses generated by AI is invented, but this does not seem to deter users, who continue to trust these tools even for health issues. Despite warnings from numerous medical professionals, the popularity of platforms like ChatGPT for medical consultations continues to grow, reflecting a worrying trend in the relationship between technology and public trust.

According to a recent study conducted by MIT researchers and published in The New England Journal of Medicine, people tend to rate medical answers generated by artificial intelligence as more reliable and complete than those offered by real doctors or specialized digital platforms.

The research included three hundred people with and without medical knowledge, who evaluated three types of responses to clinical queries: one from a health professional, another from a digital platform, and a third from an AI system.


The result was striking: participants showed a clear preference for the AI responses, rating them as the most complete and trustworthy, despite knowing that one in four diagnoses generated by these tools is incorrect.

Even in the face of evidence of frequent errors, many users expressed their willingness to follow the recommendations generated by artificial intelligence.

The study also highlights how difficult it is for users – and even for doctors themselves – to identify when a diagnosis has been made by an AI and when by a human specialist.

The convincing presentation of responses generated by artificial language models makes the difference, in practice, almost imperceptible. This difficulty in distinguishing between the two sources increases the risk that errors or inventions go unnoticed and are taken as valid by patients.


The ease of access, the immediacy of the answers and the appearance of completeness that artificial intelligence offers lead many users to trust it, even when it comes to sensitive issues such as health.

However, experts warn that this reliance can be dangerous, as AI lacks the clinical training and professional judgment necessary to make sound medical decisions. The study itself quotes a specialist: “Artificial intelligence is practicing medicine without having adequate preparation for it.”

In a context where much of the digital content online is already generated by AI, the reliability of medical information available on the internet becomes more questionable than ever. The researchers insist on the need to adopt a critical attitude and not leave decisions that can seriously affect people’s health in the hands of algorithms.


The advancement of artificial intelligence (AI) in scientific research has caused a significant increase in the number of articles published, although this growth has been accompanied by a decrease in their perceived quality.

A study conducted by Cornell University reveals that researchers who use AI generate up to 50% more publications than their colleagues who do not use these tools, a phenomenon especially marked in countries where English is not the main language.

However, these works have a lower acceptance rate in specialized journals, which suggests that their contribution or scientific relevance could be more limited. Given this panorama, the authors of the study highlight the need to establish new regulations and standards that keep pace with the speed at which AI is reshaping the academic environment.

Aiman Sohail
Dr. Aiman Sohail is a seasoned journalist and geopolitical analyst with over a decade of experience covering global affairs, politics, and current events. She earned her Bachelor’s degree in International Relations from Quaid-i-Azam University, Islamabad, followed by a Master’s in Political Science from Lahore University of Management Sciences (LUMS). Driven by a passion for understanding global dynamics, she completed her PhD in International Security Studies at The University of London, focusing on South Asian geopolitics and conflict resolution. Sohail began her career as a correspondent for The Express Tribune, covering domestic politics and economic developments. She later joined Geo News as a senior reporter, specializing in geopolitical affairs, foreign policy, and conflict analysis. Over the years, her articles have been featured in major national and international publications, including Dawn, The Diplomat, and Al Jazeera English, earning her recognition for insightful analysis and in-depth reporting. In addition to journalism, Sohail frequently contributes to academic forums, think tanks, and panel discussions on international relations. Her expertise lies in South Asian security, diplomatic policy, and global political trends, making her one of Pakistan’s leading voices in contemporary geopolitics.
