Claim
Large language model hallucinations are driven by the interaction of four causes: training data issues (noisy or inaccurate web data), knowledge gaps (questions about tail entities rarely seen in training), completion pressure (the drive to produce a confident-sounding answer rather than abstain), and exposure bias (early errors compounding over long-form generation).
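As a rough illustration of the exposure-bias point (early errors compounding), the sketch below assumes a hypothetical constant per-token error rate `eps` and computes the chance that a generation of a given length stays error-free; the rate and lengths are illustrative assumptions, not figures from the cited source.

```python
# Toy illustration (not from the source article): how a small per-token error
# rate compounds over long-form generation once the model conditions on its
# own previous output. Assumes independent errors with constant probability.

def error_free_probability(eps: float, length: int) -> float:
    """Probability that a completion of `length` tokens contains no error,
    assuming independent per-token errors with probability `eps`."""
    return (1.0 - eps) ** length

if __name__ == "__main__":
    eps = 0.01  # hypothetical 1% chance of an unsupported token per step
    for length in (10, 50, 200, 500):
        p = error_free_probability(eps, length)
        print(f"{length:>4} tokens: P(no error) = {p:.3f}")
    # Shows the compounding effect: about 0.904 at 10 tokens but roughly
    # 0.007 at 500 tokens, which is why early mistakes matter in long answers.
```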
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via Serper)
Referenced by nodes (2)
- hallucination concept
- exposure bias concept