claim
Chain-of-Thought prompting and instruction-based inputs help mitigate hallucinations in Large Language Models, but neither is sufficient in isolation.
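An illustrative sketch of the two prompting styles the claim contrasts. This is not code from the cited survey; the function names and prompt wording are hypothetical, and a Chain-of-Thought prompt simply asks the model to reason step by step before answering.

```python
def plain_prompt(question: str) -> str:
    # Instruction-based input: state the task directly.
    return f"Answer the question concisely.\nQ: {question}\nA:"

def cot_prompt(question: str) -> str:
    # Chain-of-Thought variant: elicit intermediate reasoning before the
    # final answer, which can reduce (but not eliminate) hallucinations.
    return (
        "Answer the question. State each fact you rely on, step by step, "
        f"before giving the final answer.\nQ: {question}\nA: Let's think step by step."
    )

question = "What causes hallucinations in large language models?"
print(plain_prompt(question))
print(cot_prompt(question))
```

Per the claim, either style alone still leaves residual hallucinations; they are mitigations, not guarantees.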
Authors
Sources
- Survey and analysis of hallucinations in large language models (frontiersin.org)
Referenced by nodes (3)
- Large Language Models concept
- hallucination concept
- chain-of-thought concept