Claim
In long-form generation, large language models tend to cascade early factual errors: once an incorrect premise is introduced, the model continues to build on it rather than reversing course, because its training does not incentivize self-correction.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models concept