claim
Robust approaches to mitigating large language model hallucinations target multiple causes simultaneously: retrieval augmentation for knowledge gaps, better data curation for training-data errors, scheduled-sampling variants for exposure bias, and calibration training for overconfident generation.
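The first of these, retrieval augmentation, can be sketched minimally: fetch passages relevant to the query and prepend them to the prompt so the model answers from evidence rather than parametric memory. The toy lexical retriever, corpus, and prompt template below are illustrative assumptions, not any particular library's API.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (toy lexical retriever;
    a real system would use dense embeddings or BM25)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved evidence so the model can ground its answer in it."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        f"Answer using only the context above."
    )

# Hypothetical mini-corpus for demonstration.
corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8849 metres tall.",
    "Paris is the capital of France.",
]
print(build_prompt("How tall is the Eiffel Tower?", corpus))
```

The grounding step addresses knowledge gaps specifically; the other causes listed (exposure bias, miscalibration) require training-time interventions and are not covered by this sketch.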

Referenced by nodes (3)