claim
If a hallucinated answer disappears when the question is rephrased more explicitly or posed with Chain-of-Thought prompting, the cause is likely prompt-related; if the hallucination persists across all prompt variants, the cause more likely lies in the model's internal behavior.
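The heuristic can be sketched as a small diagnostic loop: probe the same question under several prompt variants and classify the failure by whether any variant avoids the hallucination. This is a minimal illustration, not a method from the cited survey; `ask_model`, the variant templates, and the toy model are all hypothetical stand-ins.

```python
def diagnose(ask_model, question, is_correct, variants):
    """Return 'prompt-related' if any prompt variant avoids the
    hallucination, else 'model-internal' (it persists across all)."""
    for template in variants:
        answer = ask_model(template.format(q=question))
        if is_correct(answer):
            return "prompt-related"
    return "model-internal"

# Illustrative prompt variants: plain, more explicit, Chain-of-Thought.
VARIANTS = [
    "{q}",
    "Answer precisely and explicitly: {q}",
    "Let's think step by step. {q}",
]

# Toy stand-in model (not a real LLM call): it hallucinates on the
# plain prompt but recovers under the Chain-of-Thought variant.
def toy_model(prompt):
    return "Paris" if "step by step" in prompt else "Lyon"

verdict = diagnose(
    toy_model,
    "What is the capital of France?",
    lambda a: a == "Paris",
    VARIANTS,
)
print(verdict)  # prompt-related: the hallucination vanished under CoT
```

In practice `ask_model` would wrap a real LLM API call, and `is_correct` a reference check; a "model-internal" verdict would then suggest the error is not fixable by prompting alone.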
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (3)
- hallucination concept
- chain-of-thought concept
- prompt concept