Relations (1)
related 3.17 — strongly supporting 8 facts
Chain-of-thought is a reasoning technique that influences the occurrence of hallucinations in large language models: it is credited with reducing factual errors [1], [2], [3], but it can also inadvertently elaborate on them [4], [5].
Facts (8)
Sources
Survey and analysis of hallucinations in large language models (frontiersin.org) — 5 facts
Claim: Chain-of-Thought prompting and Instruction-based inputs are effective for mitigating hallucinations in Large Language Models but are insufficient in isolation.
Procedure: Prompt tuning approaches, such as Chain-of-Thought prompting (Wei et al., 2022) and Self-Consistency decoding (Wang et al., 2022), aim to reduce hallucinations without altering the underlying model.
Claim: Structured prompt strategies, such as chain-of-thought (CoT) prompting, significantly reduce hallucinations in prompt-sensitive scenarios, although intrinsic model limitations persist in some cases.
Claim: Chain-of-Thought prompting can backfire by making hallucinations more elaborate when a model fundamentally lacks knowledge of a query: the model may rationalize a falsehood in detail.
Claim: If a hallucinated answer disappears when a question is asked more explicitly or via Chain-of-Thought, the cause is likely prompt-related; if the hallucination persists across all prompt variants, the cause likely lies in the model's internal behavior.
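The prompt-variant diagnostic in the last fact above can be sketched in code: re-ask the same question directly, more explicitly, and with a Chain-of-Thought cue, then compare which variants still produce the wrong answer. This is a minimal sketch under stated assumptions — `query_model` is a hypothetical stub standing in for a real LLM call, and the question/answer pair is illustrative only.

```python
def query_model(prompt: str) -> str:
    """Hypothetical model stub: answers correctly only when nudged to reason.
    A real implementation would call an actual LLM here."""
    if "step by step" in prompt or "exact" in prompt:
        return "1889"
    return "1887"  # hallucinated answer on the bare prompt

def diagnose_hallucination(question: str, reference: str) -> str:
    """Classify a hallucination as prompt-related or model-internal by
    comparing answers across prompt variants, per the diagnostic above."""
    variants = {
        "direct": question,
        "explicit": f"Give the exact answer: {question}",
        "cot": f"{question} Think step by step, then state the answer.",
    }
    wrong = {name for name, p in variants.items()
             if reference not in query_model(p)}
    if not wrong:
        return "no hallucination observed"
    if wrong == set(variants):
        return "persists across variants: likely model-internal"
    return "disappears under some variants: likely prompt-related"

print(diagnose_hallucination("When was the Eiffel Tower completed?", "1889"))
# → disappears under some variants: likely prompt-related
```

With the stub, the direct prompt fails while the explicit and CoT variants succeed, so the diagnostic attributes the error to the prompt; if every variant had failed, it would point to the model's internal behavior instead.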
EdinburghNLP/awesome-hallucination-detection - GitHub (github.com) — 1 fact
Claim: Reasoning models using Chain-of-Thought (CoT) hallucinate more than base models on complex factual questions because extended generation provides more surface area for factuality drift.
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org) — 1 fact
Claim: Chain-of-thought reasoning significantly reduced hallucinations in 86.4% of tested comparisons after FDR correction (q < 0.05), demonstrating that explicit reasoning traces enable self-verification and error detection.
Medical Hallucination in Foundation Models and Their ... (medrxiv.org) — 1 fact
Claim: Inference techniques such as Chain-of-Thought (CoT) and Search Augmented Generation can effectively reduce hallucination rates in foundation models, though non-trivial levels of hallucination persist.
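Self-Consistency decoding (Wang et al., 2022), named alongside CoT prompting in the facts above, can be sketched as: sample several independent Chain-of-Thought completions and take a majority vote over their final answers. The sampled answers below are a hypothetical illustration; in practice they would come from temperature-based sampling of a real model.

```python
from collections import Counter

# Hypothetical final answers extracted from independent CoT samples.
SAMPLED_ANSWERS = ["1889", "1887", "1889", "1889", "1890", "1889", "1889"]

def self_consistency(answers: list[str]) -> str:
    """Majority vote over final answers from independent CoT samples."""
    return Counter(answers).most_common(1)[0][0]

print(self_consistency(SAMPLED_ANSWERS))
# → 1889
```

The vote suppresses the two drifting samples ("1887", "1890"), which is the mechanism by which self-consistency reduces hallucinations without altering the underlying model.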