Claim
Hallucinated responses from Large Language Models are consistently longer, and show greater length variance, than non-hallucinated responses.
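A minimal sketch of how this length signal could be checked on a labeled dataset. It assumes two hypothetical lists of response strings (`hallucinated` and `faithful`), uses simple whitespace tokenization as a length proxy, and applies two standard SciPy tests; none of this is the source paper's exact methodology.

```python
# Sketch: test whether hallucinated responses are longer and more
# variable in length than faithful ones. Data and labels are
# hypothetical; whitespace token count stands in for a real tokenizer.
from statistics import mean, variance

from scipy.stats import levene, mannwhitneyu


def length_stats(responses):
    """Return (mean, variance, raw lengths) of whitespace-token counts."""
    lengths = [len(r.split()) for r in responses]
    return mean(lengths), variance(lengths), lengths


def compare_length_signal(hallucinated, faithful):
    h_mean, h_var, h_lens = length_stats(hallucinated)
    f_mean, f_var, f_lens = length_stats(faithful)
    # One-sided Mann-Whitney U: are hallucinated responses longer?
    _, p_longer = mannwhitneyu(h_lens, f_lens, alternative="greater")
    # Levene's test: do the two groups differ in length variance?
    _, p_var_differs = levene(h_lens, f_lens)
    return {
        "hallucinated": {"mean_len": h_mean, "var_len": h_var},
        "faithful": {"mean_len": f_mean, "var_len": f_var},
        "p_longer": p_longer,
        "p_variance_differs": p_var_differs,
    }
```

With a real dataset, a model-specific tokenizer would replace `str.split`, since the claim concerns generated length rather than whitespace word count.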
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (1)
- Large Language Models (concept)