{"id":11973,"date":"2026-05-06T20:19:18","date_gmt":"2026-05-07T01:19:18","guid":{"rendered":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/?p=11973"},"modified":"2026-05-06T20:19:22","modified_gmt":"2026-05-07T01:19:22","slug":"large-language-models-and-misinformation","status":"publish","type":"post","link":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/2026\/05\/06\/large-language-models-and-misinformation\/","title":{"rendered":"Large language models and misinformation"},"content":{"rendered":"<div class=\"panel body-content\"><div class=\"panel__container\">\n<p><a href=\"https:\/\/www.thelancet.com\/journals\/landig\/article\/PIIS2589-7500(25)00157-8\/fulltext\">The Lancet<\/a> The barrage of&nbsp;<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11522648\/\" target=\"_blank\" aria-label=\"misinformation in the field of health care, opens in a new window\" rel=\"noreferrer noopener\">misinformation in the field of health care<\/a>&nbsp;is persistent and growing. The advent of artificial intelligence (AI) and large language models (LLMs) in health care has expedited the increase in misinformation, and LLMs are susceptible to false output if they are trained on incorrect health-care information. This risk of misinformation is especially acute for LLMs trained on vast datasets of information originating from online sources, and it can be particularly difficult to navigate when developers do not disclose the databases used to train such tools. Incorrect medical advice generated by LLMs has serious consequences for patients. 
How can we quantify and ultimately reduce the misinformation caused by LLMs to ensure better patient health outcomes?<\/p>\n\n\n\n<p>This month in&nbsp;<em>The Lancet Digital Health<\/em>,&nbsp;<a href=\"https:\/\/doi.org\/10.1016\/j.landig.2025.100949\" target=\"_blank\" aria-label=\"Mahmud Omar and colleagues, opens in a new window\" rel=\"noreferrer noopener\">Mahmud Omar and colleagues<\/a>&nbsp;present a benchmark study testing the susceptibility of general-purpose LLMs, as well as LLMs specifically trained for medical use, to medical misinformation embedded in prompts (the inputs users provide to LLMs as instructions). Twenty LLMs were evaluated using 3\u00b74 million prompts drawn from a collection of hospital discharge notes, simulated clinical vignettes, and social media posts, all containing fabricated medical information. Performance on two tasks, detecting misinformation in a recommendation and identifying a logical fallacy (a flaw in the LLM\u2019s reasoning process), varied by model. Interestingly, the popular general-purpose GPT-4o model was both the least susceptible to misinformation and the most accurate at fallacy detection; furthermore, the medically fine-tuned tools performed consistently worse than the general-purpose tools. This study shows that LLMs are vulnerable to misinformation, particularly when it is conveyed in an authoritative tone. The study also represents the first large-scale, structured benchmarking exercise to assess how LLMs manage prompts containing medical misinformation. Its strengths lie in testing a wide range of models, including general-purpose and medical tools, as well as both open-source and proprietary models. However, it is important to acknowledge limitations such as the text-only format, which does not reflect the multimodal real-world and fabricated medical data that could be fed into LLMs. 
Furthermore, the downstream clinical effects, such as impacts on health outcomes or user trust in the tools, were not investigated.<\/p>\n<\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>The Lancet The barrage of&nbsp;misinformation in the field of health care&nbsp;is persistent and growing. The advent of artificial intelligence (AI) and large language models (LLMs) in health care has expedited the increase in misinformation, and LLMs are susceptible to false output if they are trained on incorrect health-care information. This risk of misinformation is especially [&hellip;]<\/p>\n","protected":false},"author":11,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[17],"tags":[],"class_list":["post-11973","post","type-post","status-publish","format-standard","hentry","category-misinformation-disinformation-and-conspiracy-theories"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/posts\/11973","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/comments?post=11973"}],"version-history":[{"count":1,"href":"ht
tps:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/posts\/11973\/revisions"}],"predecessor-version":[{"id":11974,"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/posts\/11973\/revisions\/11974"}],"wp:attachment":[{"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/media?parent=11973"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/categories?post=11973"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.unmc.edu\/healthsecurity\/transmission\/wp-json\/wp\/v2\/tags?post=11973"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}