Claim
Chain-of-Thought (CoT) prompting (Wei et al., 2022) improves large language models' accuracy on multi-step reasoning tasks and makes their reasoning more transparent by eliciting explicit step-by-step rationales before the final answer.
Sources
- Survey and analysis of hallucinations in large language models (frontiersin.org)
Referenced by nodes (3)
- Large Language Models concept
- chain-of-thought concept
- factual correctness concept
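As a minimal illustration of the claim (not drawn from the source), the sketch below contrasts a direct prompt with a few-shot chain-of-thought prompt in the style of Wei et al. (2022). The question, exemplar, and function names are hypothetical, and no model is actually called; only the prompt construction is shown.

```python
# Illustrative sketch only: a standard prompt vs. a chain-of-thought (CoT)
# prompt. The exemplar and question are hypothetical; no LLM call is made.

# Few-shot CoT exemplar: a worked question whose answer spells out the
# intermediate reasoning steps, encouraging step-wise output generation.
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def direct_prompt(question: str) -> str:
    """Standard prompting: ask for the answer with no rationale."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Few-shot CoT prompting: prepend the worked exemplar so the model
    imitates its step-by-step rationale before giving a final answer."""
    return EXEMPLAR + f"Q: {question}\nA:"

question = "A farmer has 12 eggs and sells 5. How many are left?"
print(cot_prompt(question))
```

The only difference between the two prompts is the prepended rationale; the claim above is that this addition alone shifts the model toward emitting (and conditioning on) its own intermediate steps.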