Claim
Hallucinations in medical large language models (LLMs) are the product of learned statistical correlations in the training data, coupled with architectural constraints such as limited causal reasoning, as identified by Jiang et al. (2023) and Glicksberg (2024).
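
The mechanism in this claim can be illustrated with a toy sketch: a model that predicts the next token purely from co-occurrence statistics can recombine patterns from different training sentences into a fluent but unsupported statement. The corpus, the bigram model, and the generated examples below are hypothetical illustrations, not material from the cited papers.

```python
# Minimal sketch (hypothetical toy data): next-token prediction from raw
# co-occurrence counts, with no notion of truth or causal structure.
import random
from collections import defaultdict

# Tiny synthetic "medical" corpus; each sentence is consistent on its own.
corpus = [
    "metformin is prescribed for type 2 diabetes",
    "insulin is prescribed for type 1 diabetes",
    "aspirin is prescribed for headache and fever",
    "metformin lowers blood glucose levels",
    "aspirin thins the blood",
]

# Estimate P(next_word | word) from bigram counts.
bigrams = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def generate(start, max_len=8, seed=0):
    """Sample a continuation by following learned correlations only."""
    random.seed(seed)
    out = [start]
    for _ in range(max_len):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

if __name__ == "__main__":
    # Depending on the sampled path, the model can splice patterns from
    # different sentences, e.g. "aspirin is prescribed for type 2 diabetes":
    # fluent and statistically plausible under the counts, yet false in the
    # toy world -- a correlation-driven hallucination in miniature.
    for s in range(5):
        print(generate("aspirin", seed=s))
```

The sketch illustrates only the "learned statistical correlations" half of the claim; the architectural-constraint half (limited causal reasoning) is reflected implicitly in the fact that the generator has no representation of cause and effect to rule such outputs out.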

Authors

Sources

Referenced by nodes (2)