claim
The integration of large language models into healthcare introduces risks to patient care: hallucinated outputs have the potential to influence therapeutic choices, diagnostic pathways, and patient-provider communication, as noted by Topol (2019), Mehta and Devarakonda (2018), and Hata et al. (2022).
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org
Referenced by nodes (2)
- Large Language Models concept
- health care concept