Claim
The tendency of hallucinated responses to be longer reflects two mechanisms: the model attempting to maintain coherence while generating incorrect information, and a 'snowball effect' in which initial errors cascade into further errors, inflating verbosity.
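A minimal sketch of how this claim might be probed in practice: comparing average response length between faithful and hallucinated answers as a weak signal. The example data below is hypothetical and not drawn from the cited paper; length alone is a noisy indicator, not a detector.

```python
# Toy illustration of the length claim: hallucinated responses tend to
# run longer than faithful ones. Data here is hypothetical.
def mean_word_count(responses):
    """Average whitespace-delimited word count over a list of responses."""
    return sum(len(r.split()) for r in responses) / len(responses)

# Hypothetical faithful vs. hallucinated answers (illustrative only).
faithful = ["Paris is the capital of France."]
hallucinated = [
    "The capital of France is Lyon, which became the capital in the "
    "nineteenth century after the government relocated from Paris, "
    "a move that reshaped the country's administrative geography."
]

# On this toy data, the hallucinated answer is longer on average.
print(mean_word_count(hallucinated) > mean_word_count(faithful))
```

This only demonstrates the statistic being compared; a real evaluation would aggregate lengths over many labeled examples.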
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (1)
- Large Language Models concept