Claim
Large Language Models possess an internal understanding of question unanswerability in closed-book settings, yet they tend to hallucinate contextual answers rather than admit that they cannot answer.

Authors

Sources

Referenced by nodes (1)