Claim
Large language models hallucinate more in long responses than in short ones: each autoregressive step conditions on previously generated tokens, so opportunities for error accumulate with sequence length, and because training (teacher forcing) only ever conditions on ground-truth prefixes, nothing in the objective bounds how far generation can diverge once the model conditions on its own erroneous output.
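A minimal sketch of the compounding-error argument, under an illustrative assumption not made by the source: if each generated token independently introduces an error with probability `eps`, the probability that a response of length `n` is error-free is `(1 - eps) ** n`, which decays exponentially with length.

```python
def p_error_free(eps: float, n: int) -> float:
    """Probability that none of n tokens contains an error, assuming an
    independent, constant per-token error rate eps (an illustrative
    simplification; real per-token error rates can grow as the model
    conditions on its own earlier mistakes)."""
    return (1.0 - eps) ** n

# With a 1% per-token error rate, longer responses are far less
# likely to remain error-free:
for n in (10, 100, 1000):
    print(f"n={n:>4}: P(error-free) = {p_error_free(0.01, n):.3f}")
# n=  10: P(error-free) = 0.904
# n= 100: P(error-free) = 0.366
# n=1000: P(error-free) = 0.000
```

The independence assumption makes this a lower bound on the effect: with exposure bias, errors in the prefix raise the error rate of later tokens, so the decay is typically faster than the i.i.d. model suggests.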
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models (concept)