Claim
Self-consistency decoding (Wang et al., 2022), ReAct prompting (Yao et al., 2022), and instruction tuning (Ouyang et al., 2022) reduce hallucination rates by shaping how a model generates and selects among candidate reasoning paths: self-consistency samples multiple chains of thought and takes a majority vote over their final answers, ReAct interleaves reasoning steps with external actions, and instruction tuning aligns outputs with human preferences. These methods are heuristic, however, and do not universally prevent hallucinations across all domains or tasks.
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- hallucination concept
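Of the three methods in the claim, self-consistency decoding is the simplest to illustrate: sample several candidate answers and keep the most frequent one. A minimal sketch, assuming the model call is abstracted behind a `sample_fn` callable (the deterministic `_stub_answers` sampler below is hypothetical, standing in for repeated LLM calls with temperature > 0):

```python
import itertools
from collections import Counter
from typing import Callable, List

def self_consistency(sample_fn: Callable[[], str], k: int = 5) -> str:
    """Draw k candidate final answers and return the majority-vote winner."""
    answers: List[str] = [sample_fn() for _ in range(k)]
    # Majority vote over final answers, as in Wang et al. (2022).
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stand-in for stochastic LLM sampling: most paths agree on "42".
_stub_answers = itertools.cycle(["42", "42", "41", "42", "40"])
answer = self_consistency(lambda: next(_stub_answers), k=5)
print(answer)
```

The vote aggregates over diverse reasoning paths, which is why it can suppress hallucinated answers that only a minority of samples produce; it cannot help when the model is consistently wrong.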