Claim
In clinical settings, hallucinations in large language models can undermine the reliability of AI-generated medical information, potentially affecting patient outcomes by influencing diagnostic reasoning, therapeutic recommendations, or patient counseling.
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org)
Referenced by nodes (1)
- Large Language Models (concept)