One in four medical diagnoses generated by AI is invented, yet this does not seem to deter users, who continue to trust these tools even for health issues. Despite warnings from numerous medical professionals, the popularity of platforms like ChatGPT for medical consultations continues to grow, reflecting a worrying trend in the relationship between technology and public trust.
According to a recent study conducted by MIT researchers and published in The New England Journal of Medicine, people tend to rate medical answers generated by artificial intelligence as more reliable and complete than those offered by real doctors or specialized digital platforms.
The research involved three hundred participants, with and without medical knowledge, who evaluated three types of responses to clinical queries: one from a health professional, another from a digital platform, and a third from an AI system.
The result was overwhelming: participants showed a notable preference for AI responses, rating them as the most complete and trustworthy, despite knowing that one in four diagnoses generated by these tools is incorrect.
Even in the face of evidence of frequent errors, many users expressed their willingness to follow the recommendations generated by artificial intelligence.
The study also highlights how difficult it is for users – and even for doctors themselves – to identify when a diagnosis has been made by an AI and when by a human specialist.
The convincing presentation of responses generated by artificial language models makes the difference, in practice, almost imperceptible. This difficulty in distinguishing between the two sources increases the risk that errors or inventions go unnoticed and are taken as valid by patients.
The ease of access, the immediacy of the answers, and the appearance of completeness that artificial intelligence offers lead many users to trust it, even on sensitive matters such as health.
However, experts warn that this reliance can be dangerous, as AI lacks the clinical training and professional judgment necessary to make sound medical decisions. The study itself quotes one specialist: “Artificial intelligence is practicing medicine without having adequate preparation for it.”
In a context where much of the digital content is already generated by AI, the reliability of medical information available on the internet becomes more questionable than ever. The researchers insist on the need to adopt a critical attitude and not leave decisions that can seriously affect people’s health in the hands of algorithms.
The advancement of artificial intelligence (AI) in scientific research has caused a significant increase in the number of articles published, although this growth has been accompanied by a decrease in their perceived quality.
A study conducted by Cornell University reveals that researchers who use AI generate up to 50% more publications than their colleagues who do not use these tools, a phenomenon especially marked in countries where English is not the main language.
However, these works have a lower acceptance rate in specialized journals, which suggests that their contribution or scientific relevance could be more limited. Given this panorama, the authors of the study highlight the need to establish new regulations and standards that keep pace with the dizzying speed at which AI is reshaping the academic environment.

