claim
Large Language Models (LLMs) exhibit systematic errors known as medical hallucinations, in which they generate incorrect or misleading medical information that can adversely affect clinical decision-making and patient outcomes.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org via serper
Referenced by nodes (3)
- Large Language Models concept
- medical hallucination concept
- clinical decision-making concept