claim
Researchers have attempted to reduce hallucinations in Large Language Models using prompting and inference-time techniques, including chain-of-thought prompting, self-consistency decoding, retrieval-augmented generation (RAG), and verification-based refinement.
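Of the listed techniques, self-consistency decoding is simple enough to sketch: sample several reasoning paths at nonzero temperature and majority-vote their final answers. The sketch below uses a hypothetical `generate_answer` stand-in for sampled model calls; a real implementation would query an LLM with `temperature > 0`.

```python
from collections import Counter

def generate_answer(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for one sampled model completion.
    # A real system would call an LLM here with a sampling seed
    # or temperature, then extract the final answer from the
    # generated chain of thought.
    canned = ["42", "42", "41", "42", "42"]
    return canned[seed % len(canned)]

def self_consistency(prompt: str, n_samples: int = 5) -> str:
    """Sample several reasoning paths and return the majority answer."""
    answers = [generate_answer(prompt, seed) for seed in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```

The intuition is that independent reasoning paths tend to agree on correct answers and disagree on hallucinated ones, so the mode of the sampled answers is more reliable than any single completion.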
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org, via serper)
Referenced by nodes (3)
- Large Language Models concept
- Retrieval-Augmented Generation (RAG) concept
- chain-of-thought concept