claim
Large language models developed explicitly for medical purposes remain vulnerable to domain-specific hallucinations, which often arise from reasoning failures rather than mere knowledge gaps.
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org)
Referenced by nodes (1)
- Large Language Models (concept)