Claim
Robust approaches to mitigating large language model hallucinations target multiple causes simultaneously: retrieval augmentation for knowledge gaps, improved data curation for training-data issues, scheduled sampling variants for exposure bias, and calibration training for generation pressure.
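
To make one of these mechanisms concrete, below is a minimal sketch of scheduled sampling, the exposure-bias mitigation named in the claim: during training, each decoding step is fed the model's own previous prediction with some probability instead of the gold token, and that probability is annealed upward over training (Bengio et al., 2015). This assumes a toy PyTorch GRU decoder; `ToyDecoder`, `sampling_schedule`, and all sizes here are illustrative, not taken from the cited source.

```python
import math

import torch
import torch.nn as nn


class ToyDecoder(nn.Module):
    """Toy GRU decoder used only to illustrate scheduled sampling."""

    def __init__(self, vocab_size: int, hidden_size: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, targets: torch.Tensor, sampling_prob: float = 0.0):
        """targets: (batch, seq_len) gold token ids; position 0 is assumed
        to be a BOS token. At each step, with probability `sampling_prob`
        the decoder is fed its own previous prediction instead of the gold
        token; at 0.0 this reduces to ordinary teacher forcing."""
        batch, seq_len = targets.shape
        h = torch.zeros(batch, self.cell.hidden_size)
        inp = targets[:, 0]
        step_logits = []
        for t in range(1, seq_len):
            h = self.cell(self.embed(inp), h)
            logits = self.out(h)
            step_logits.append(logits)
            # Per-example coin flip: feed the model's prediction or the gold token.
            use_model = torch.rand(batch) < sampling_prob
            inp = torch.where(use_model, logits.argmax(dim=-1), targets[:, t])
        return torch.stack(step_logits, dim=1)  # (batch, seq_len - 1, vocab)


def sampling_schedule(step: int, k: float = 100.0) -> float:
    """Inverse-sigmoid decay of the teacher-forcing ratio (Bengio et al.,
    2015): the probability of feeding model samples starts near 0 and
    grows toward 1 as training progresses."""
    return 1.0 - k / (k + math.exp(step / k))


# Usage sketch: one training step with the current schedule value.
model = ToyDecoder(vocab_size=1000)
tokens = torch.randint(0, 1000, (8, 20))
logits = model(tokens, sampling_prob=sampling_schedule(step=500))
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
```

The annealing matters because exposing the model to its own predictions too early destabilizes training, while never exposing it at all leaves the train-time and inference-time input distributions mismatched, which is the exposure-bias problem this variant targets.
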
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via Serper)
Referenced by nodes (3)
- large language model hallucination concept
- exposure bias concept
- knowledge gaps concept