Claim
Large language models lack learned error-correction behavior because they are never trained to recover from their own mistakes; at inference, autoregressive decoding therefore conditions every future token on any inaccurate token generated early in the sequence, so early errors propagate rather than being corrected.
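A minimal sketch of the mechanism, assuming a hypothetical `model` callable that returns next-token logits of shape (batch, seq, vocab); all names here are illustrative, not from the source. The first function shows that greedy decoding appends each sampled token to the context, locking any mistake into the conditioning prefix; the second shows teacher-forced training, where the loss is always computed against ground-truth prefixes, so the model never practices recovering from its own errors.

```python
import torch

def generate(model, prompt_ids: list[int], max_new_tokens: int) -> list[int]:
    """Greedy autoregressive decoding: every new token is conditioned on
    ALL previously generated tokens, so an inaccurate token emitted early
    stays in the context for the rest of the sequence."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([ids]))    # condition on the full prefix
        next_id = int(logits[0, -1].argmax())  # pick the most likely token
        ids.append(next_id)                    # any error is now locked in
    return ids

def teacher_forced_loss(model, target_ids: torch.Tensor) -> torch.Tensor:
    """Teacher forcing at training time: inputs are the TRUE prefix, so the
    model is never exposed to (or trained to correct) its own mistakes."""
    logits = model(target_ids[:, :-1])         # ground-truth tokens as input
    return torch.nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # predict each next token...
        target_ids[:, 1:].reshape(-1),         # ...of the gold sequence
    )
```

The asymmetry between the two functions is the claim in code: training only ever sees gold prefixes, while generation feeds the model its own (possibly wrong) outputs.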
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models concept