Claim
Hallucinations in medical large language models (LLMs) are the product of statistical correlations learned from training data, coupled with architectural constraints such as limited causal reasoning, as identified by Jiang et al. (2023) and Glicksberg (2024).
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (2)
- hallucination concept
- training data concept