claim
Hallucination in large language models is a structural issue: it originates in how training data is collected, how the optimization objective is constructed, the limits of what knowledge the model can represent, and how the decoding process converts probability distributions into tokens.
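The last mechanism the claim names can be made concrete: at each step the decoder must emit some token, even when the model's distribution over the vocabulary is nearly flat. Below is a minimal Python sketch of this, assuming a toy vocabulary and invented logits (the prompt, words, and scores are purely illustrative, not from any real model).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability distribution."""
    z = np.array(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy next-token candidates after a prompt like "The capital of Atlantis is"
vocab = ["Paris", "Poseidonis", "unknown", "London"]
logits = [1.2, 1.1, 0.9, 1.0]  # nearly flat: the model is uncertain

probs = softmax(logits)
rng = np.random.default_rng(0)

# Greedy decoding commits to the argmax, however weak its margin over
# the alternatives; the output looks just as confident either way.
print("greedy: ", vocab[int(np.argmax(probs))])

# Sampling draws a token in proportion to probability. In both cases the
# decoder must output *something*, even when no answer is well supported,
# which is one way a fluent fabrication gets emitted.
print("sampled:", rng.choice(vocab, p=probs))
```

The point of the sketch is that nothing in the decoding step distinguishes a well-grounded token from a poorly supported one; the uncertainty visible in the distribution is discarded once a single token is chosen.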
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (3)
- Large Language Models concept
- hallucination concept
- training data concept