claim
Svenstrup et al. (2015) observe that Large Language Models often lack exposure to rare diseases during training, causing them to hallucinate when generating diagnostic insights.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept