procedure
Techniques such as Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) and Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) are used to mitigate model-level hallucinations.
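As an illustrative sketch of the RAG idea referenced above: retrieve relevant passages first, then condition generation on them so answers are grounded in external evidence rather than parametric memory alone. The corpus, word-overlap scoring, and prompt template below are simplifying assumptions, not the method of Lewis et al. (2020), which uses dense retrieval.

```python
# Minimal RAG sketch (illustrative): retrieve context, then augment the prompt.
# All names and the toy relevance score are assumptions for demonstration only.

def score(query: str, doc: str) -> int:
    """Count query words appearing in the document (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from evidence."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "RLHF fine-tunes a model with a reward signal learned from human preferences.",
    "RAG retrieves external documents and conditions generation on them.",
]
prompt = build_prompt("How does RAG condition generation?", corpus)
```

In a real system the retriever would be a dense or sparse index over a large corpus, and `prompt` would be passed to the language model for generation.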
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (3)
- hallucination concept
- Retrieval-Augmented Generation (RAG) concept
- Reinforcement learning from human feedback (RLHF) concept