Claim
Clinically oriented Large Language Models (LLMs) produce hallucinations that are exacerbated by the complexity and specificity of medical knowledge, where subtle differences in terminology or reasoning can lead to significant misunderstandings.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org, via serper)
Referenced by nodes (1)
- hallucination concept