Respondents most frequently cited insufficient training data (31 mentions) and biased training data (31 mentions) as causes of AI hallucinations, followed by limitations in model architecture (30), lack of real-world context (26), overconfidence in AI-generated responses (24), and inadequate transparency of AI decision-making (14).
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org