claim
Exposure bias in large language models does not imply that the model lacks the correct answer; rather, hallucinations arise because a single early error shifts the input distribution away from what the model saw during training, activating incorrect associations even when the model possesses reliable knowledge.
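A minimal sketch of the mechanism, using a hypothetical bigram table as a stand-in for a trained model (all tokens and names here are illustrative, not from the source): the model's knowledge is reliable on in-distribution prefixes, but one corrupted token puts every later step in a context it was never trained on.

```python
# Hypothetical bigram "model": next-token distributions per previous token.
# It has reliable knowledge for every context it was trained on.
BIGRAMS = {
    "the": {"cat": 0.9, "dog": 0.1},
    "cat": {"sat": 1.0},
    "dog": {"barked": 1.0},
    "sat": {"down": 1.0},
}

def greedy_next(prev):
    """Pick the most likely next token; off-distribution contexts fail."""
    dist = BIGRAMS.get(prev)
    if dist is None:
        # The context no longer matches training data, so the model
        # falls back on a fabricated continuation -- a hallucination.
        return "<hallucination>"
    return max(dist, key=dist.get)

def generate(first, steps):
    """Free-running generation: each output becomes the next input."""
    out = [first]
    for _ in range(steps):
        out.append(greedy_next(out[-1]))
    return out

# In-distribution prefix: the model's knowledge is applied correctly.
print(generate("the", 3))   # ['the', 'cat', 'sat', 'down']

# One early error ("teh") changes the input distribution; every later
# step conditions on a corrupted prefix, even though the correct
# bigram knowledge is still present in the table above.
print(generate("teh", 3))
```

The point of the sketch is that nothing was removed from the model's knowledge between the two runs; only the conditioning context changed.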
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts mbrenndoerfer.com via serper
Referenced by nodes (3)
- Large Language Models concept
- hallucination concept
- exposure bias concept