Claim
LLaMA 2 (13B) benefits significantly from Chain-of-Thought (CoT) prompting, though ambiguous instructions can lead to hallucinations.
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (1)
- chain-of-thought concept
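As an illustrative sketch of the chain-of-thought concept referenced above (not taken from the cited survey), a zero-shot CoT prompt can be built by appending a step-by-step reasoning cue to the question before sending it to a model such as LLaMA 2. The cue phrase, function names, and example question here are assumptions chosen for illustration:

```python
# Minimal sketch of zero-shot Chain-of-Thought (CoT) prompting.
# The cue phrase and prompt layout follow common convention; they are
# not drawn from the cited survey. The question is a placeholder.

def build_cot_prompt(question: str) -> str:
    """Wrap a question with a step-by-step reasoning cue (zero-shot CoT)."""
    return f"Q: {question}\nA: Let's think step by step."

def build_direct_prompt(question: str) -> str:
    """Plain prompt without a reasoning cue, for comparison."""
    return f"Q: {question}\nA:"

if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
    print(build_cot_prompt(q))
```

In practice, the CoT variant would be passed to the model in place of the direct prompt; the claim above suggests this form helps a 13B LLaMA 2 model reason more reliably, provided the question itself is unambiguous.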