claim
Retrieval-Augmented Generation (RAG) (Lewis et al., 2020), grounded pretraining (Zhang et al., 2023), and contrastive decoding techniques (Li et al., 2022) have been explored to counter hallucinations, either by integrating external knowledge sources during inference or by introducing architectural and decoding-time changes that promote factuality.
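Of the techniques cited, contrastive decoding is the most self-contained to illustrate: each candidate token is scored by the gap between an expert (large) model's and an amateur (small) model's log-probabilities, restricted to tokens the expert itself finds plausible. A minimal sketch of one decoding step, assuming precomputed per-token log-probabilities from both models (all function and variable names here are illustrative, not from the cited paper's code):

```python
import numpy as np

def contrastive_decode_step(expert_logprobs, amateur_logprobs, alpha=0.1):
    """One greedy step of contrastive decoding (after Li et al., 2022).

    expert_logprobs, amateur_logprobs: 1-D arrays of next-token
    log-probabilities over the same vocabulary.
    alpha: adaptive-plausibility cutoff relative to the expert's top token.
    Returns the index of the selected token.
    """
    expert_probs = np.exp(expert_logprobs)
    # Adaptive plausibility: only consider tokens to which the expert
    # assigns at least alpha times its maximum probability. This blocks
    # implausible tokens that the amateur merely dislikes even more.
    plausible = expert_probs >= alpha * expert_probs.max()
    # Score plausible tokens by the expert-amateur log-probability gap;
    # everything else is excluded with -inf.
    scores = np.where(plausible, expert_logprobs - amateur_logprobs, -np.inf)
    return int(np.argmax(scores))
```

The intuition is that generic, hallucination-prone continuations tend to be likely under both models, so subtracting the amateur's scores favors tokens that specifically reflect the expert's knowledge.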
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (2)
- hallucination concept
- Retrieval-Augmented Generation (RAG) concept