reference
The paper 'Internal Consistency and Self-Feedback in Large Language Models: A Survey' proposes an 'Internal Consistency' framework to enhance reasoning and alleviate hallucinations; the framework comprises three components: Self-Evaluation, an Internal Consistency Signal, and Self-Update.
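A minimal sketch of how the three components could fit together as a feedback loop. All names here (`sample_answers`, `self_evaluate`, `self_update`, the threshold value) are hypothetical illustrations, not the paper's actual API; the consistency signal is approximated by simple majority-vote agreement over sampled answers.

```python
from collections import Counter

def sample_answers(prompt, n=5):
    # Stand-in for n stochastic LLM generations; hard-coded for illustration.
    return ["A", "A", "B", "A", "C"][:n]

def self_evaluate(answers):
    # Self-Evaluation: derive an Internal Consistency Signal from agreement
    # among the sampled answers (here, the majority-vote share).
    counts = Counter(answers)
    best, freq = counts.most_common(1)[0]
    return best, freq / len(answers)

def self_update(best, signal, threshold=0.6):
    # Self-Update: accept the answer when the consistency signal is strong
    # enough; otherwise return None to trigger regeneration or abstention.
    return best if signal >= threshold else None

answers = sample_answers("example prompt")
best, signal = self_evaluate(answers)
result = self_update(best, signal)
```

With the hard-coded samples above, the majority answer "A" appears in 3 of 5 generations, so the signal is 0.6 and the answer is accepted.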
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub)
Referenced by nodes (2)
- Large Language Models concept
- LLM-as-a-judge concept