Claim
Subtle or plausible-sounding misinformation generated by LLMs in healthcare settings can influence clinicians' diagnostic reasoning, therapeutic recommendations, and patient counseling, as noted by Miles-Jay et al. (2023), Xia et al. (2024), Mehta and Devarakonda (2018), and Mohammadi et al. (2023).
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)
Referenced by nodes (3)
- Large Language Models (concept)
- health care (concept)
- misinformation (concept)