Claim
Large Language Models possess an internal understanding of question unanswerability in closed-book settings, yet when prompted they tend to hallucinate answers rather than admit that they cannot answer.
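One common way to examine such a claim is to train a lightweight probe on the model's hidden states and check whether it separates answerable from unanswerable questions. The sketch below is illustrative only: the GPT-2 checkpoint, the toy question set, the last-token / last-layer readout, and the logistic-regression probe are all assumptions for the example, not the setup used by the cited source.

```python
# Minimal probing sketch (assumptions: GPT-2, last-layer last-token states,
# a tiny hand-labelled question set, and a logistic-regression probe).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # assumption: any decoder-only LM with accessible hidden states
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy labels: 1 = unanswerable in a closed-book setting, 0 = answerable.
questions = [
    ("What is the capital of France?", 0),
    ("Who wrote Pride and Prejudice?", 0),
    ("What did I eat for breakfast this morning?", 1),
    ("What is my neighbor's phone number?", 1),
]

def last_token_embedding(text: str) -> torch.Tensor:
    """Return the final layer's hidden state at the last token position."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[-1][0, -1]  # shape: (hidden_dim,)

X = torch.stack([last_token_embedding(q) for q, _ in questions]).numpy()
y = [label for _, label in questions]

# A linear probe over the hidden states: if it separates the two classes on
# held-out questions better than chance, the states carry an unanswerability
# signal. Here it is scored on its own training data only to keep the sketch
# short; a real evaluation would use a large, held-out question split.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on training questions:", probe.score(X, y))
```

A linear probe is used here because it is the weakest reasonable readout: if even a linear classifier can recover unanswerability from the hidden states, the information is plausibly encoded internally rather than inferred by the probe itself.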
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub: github.com, via serper)
Referenced by nodes (1)
- Large Language Models (concept)