claim
Self-Consistency decoding (Wang et al., 2022), ReAct prompting (Yao et al., 2022), and instruction tuning (Ouyang et al., 2022) reduce hallucination rates by shaping how the model generates its output: by sampling and aggregating multiple reasoning paths, by interleaving reasoning steps with external actions, and by aligning outputs with human feedback, respectively. These methods are heuristic, however, and do not universally prevent hallucinations across all domains or tasks.
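The aggregation step of Self-Consistency decoding can be sketched as a majority vote over final answers. The sketch below assumes the final answer has already been extracted from each independently sampled reasoning path; the function name and the sample answers are illustrative, not from the cited paper.

```python
from collections import Counter

def self_consistency_answer(final_answers):
    """Return the most common final answer across sampled reasoning
    paths -- the marginalization step of Self-Consistency decoding."""
    counts = Counter(final_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers extracted from 5 sampled chains of thought:
paths = ["42", "42", "41", "42", "40"]
print(self_consistency_answer(paths))  # → 42
```

The intuition is that independent reasoning paths are less likely to agree on the same wrong answer than on the correct one, so the vote filters out some spurious (hallucinated) conclusions.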

Authors

Sources

Referenced by nodes (1)