Claim
The four major categories of root causes for large language model hallucinations are training data issues, exposure bias during learning, structural knowledge gaps, and generation pressure at inference time.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (3)
- training data concept
- large language model hallucination concept
- exposure bias concept