Claim
Response length is proposed as a simple yet effective heuristic for detecting hallucinations in Large Language Models, though the authors note that it can misclassify nuanced cases in which longer responses are factually accurate.
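A minimal sketch of how such a length heuristic might be applied. The token threshold and whitespace tokenization are illustrative assumptions, not details from the cited source; a real system would tune the threshold and use the model's own tokenizer.

```python
def flag_by_length(response: str, max_tokens: int = 120) -> bool:
    """Flag a response as a possible hallucination if it is suspiciously long.

    Hypothetical threshold: 120 tokens. Naive whitespace tokenization is
    used here purely for illustration.
    """
    return len(response.split()) > max_tokens


short_resp = "Paris is the capital of France."
long_resp = " ".join(["word"] * 200)

print(flag_by_length(short_resp))  # False
print(flag_by_length(long_resp))   # True
```

As the claim notes, this heuristic is blunt: a long but accurate response would be flagged just as readily as a long hallucinated one.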
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (2)
- Large Language Models concept
- hallucination detection concept