Claim
In long-form generation, large language models tend to compound early factual errors: once an incorrect premise is introduced, the model continues to build on it rather than reversing course, because its training does not incentivize self-correction mid-generation.
