claim
Large language model hallucinations arise from the interaction of four causes: training-data issues (noisy, unverified web text), knowledge gaps (queries about rare, long-tail entities), completion pressure (the training objective rewards fluent, confident-sounding continuations even when the model is uncertain), and exposure bias (at inference the model conditions on its own prior outputs, so early errors compound over long-form answers).
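A toy compounding model, an illustrative assumption rather than anything from the source, makes the exposure-bias mechanism concrete: if each generated token independently carries a small probability ε of introducing the first error, and every later token conditions on what came before, the chance that a T-token answer stays error-free decays geometrically.

```latex
% Toy compounding model (illustrative assumption, not from the source):
% \epsilon = per-token error probability, T = answer length in tokens.
P(\text{error-free answer}) = (1 - \epsilon)^{T}
% Example: \epsilon = 0.01, T = 300 gives (0.99)^{300} \approx 0.049,
% so even a small per-token error rate makes a fully on-track
% long-form answer unlikely.
```

Under this sketch, halving ε roughly doubles the answer length that can be generated at a fixed reliability level, which is one way to read why hallucinations concentrate in long-form responses.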
