AI models more likely to accept medical misinformation if written in professional tone: Lancet study
The research suggests that for many AI systems, the way information is written is more important than whether it is actually true.

- Feb 10, 2026
- Updated Feb 10, 2026 1:39 PM IST
Artificial intelligence tools are more likely to give incorrect medical advice when the misinformation appears to come from a professional source, according to a new study.
Researchers found that AI software is more easily tricked by mistakes hidden in realistic-looking doctors’ notes than by errors found in social media discussions. The study, published in The Lancet Digital Health and reported by Reuters, tested 20 different AI models to see how they handled fabricated medical information.
Professional tone over truth
The research suggests that for many AI systems, the way information is written is more important than whether it is actually true. "Current AI systems can treat confident medical language as true by default, even when it's clearly wrong," said Dr Eyal Klang of the Icahn School of Medicine at Mount Sinai, who co-led the study.
In the tests, researchers exposed AI tools to three types of content: real hospital discharge summaries containing one fake recommendation, common health myths taken from the social media site Reddit, and clinical scenarios written by doctors.
The results showed that AI models believed the fake information in roughly 32% of cases overall. However, when the misinformation looked like an official hospital note, the models were far more likely to believe it and pass it on to users.
A sceptical eye on social media
According to Reuters, Dr Girish Nadkarni, chief AI officer of Mount Sinai Health System, noted that AI tools were much more suspicious of information from social media. While the models believed nearly 47% of errors in hospital notes, that figure dropped to just 9% when the misinformation came from a Reddit post.
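The comparison above comes down to a simple acceptance-rate calculation: the share of trials in which a model repeated the fabricated claim. A minimal sketch of that tally (the function name and the simulated outcomes below are illustrative, not data from the study):

```python
def acceptance_rate(responses):
    """Fraction of trials in which the model accepted the false claim."""
    if not responses:
        return 0.0
    return sum(1 for accepted in responses if accepted) / len(responses)

# Simulated outcomes (True = the model repeated the fabricated claim),
# chosen to mirror the rates reported in the article.
hospital_note_runs = [True] * 47 + [False] * 53
reddit_post_runs = [True] * 9 + [False] * 91

print(f"Hospital notes: {acceptance_rate(hospital_note_runs):.0%}")
print(f"Reddit posts:   {acceptance_rate(reddit_post_runs):.0%}")
```

The gap between the two rates, not either number on its own, is what led the researchers to conclude that the models were weighting the apparent source of the text.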
The study also found that AI was more likely to agree with false information if the person asking the question used an authoritative tone, such as pretending to be a senior clinician.
OpenAI’s GPT models were found to be the most accurate at spotting errors. Other models were much less reliable, failing to detect false claims in up to 63.6% of cases.
As AI-powered apps and bots become more common in healthcare and are used for everything from transcribing notes to assisting in surgery, the accuracy of these tools is a growing concern.
Dr Nadkarni told Reuters that while AI has the potential to provide fast insights and support for both patients and doctors, it requires better protections. "It needs built-in safeguards that check medical claims before they are presented as fact," he said, adding that the study highlights exactly where these systems need to be strengthened before they become a standard part of medical care.