claim
Large language models reliably hallucinate on tail entities (entities that appear only rarely in the training corpus) because the statistical signal from so few mentions is too weak to encode accurate facts about them.
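A minimal back-of-the-envelope sketch of why the signal is weak, with all corpus numbers hypothetical: if entity mentions in a web-scale corpus follow a roughly Zipfian distribution, a head entity receives tens of millions of mentions while an entity at the tail of the same corpus receives only a few dozen, far too few to pin down reliable facts.

```python
import numpy as np

# Minimal sketch, all numbers hypothetical: entity mentions in a
# web-scale corpus are assumed to follow a Zipf distribution, so the
# expected mention count falls off steeply with popularity rank.
num_entities = 1_000_000        # hypothetical entity vocabulary size
total_mentions = 1_000_000_000  # hypothetical corpus-wide mention count

# Zipf weights: mention probability of the rank-k entity ~ 1/k.
ranks = np.arange(1, num_entities + 1)
weights = 1.0 / ranks
probs = weights / weights.sum()

expected = total_mentions * probs
print(f"head entity (rank 1):         ~{expected[0]:,.0f} mentions")
print(f"mid entity  (rank 100,000):   ~{expected[99_999]:,.0f} mentions")
print(f"tail entity (rank 1,000,000): ~{expected[-1]:,.0f} mentions")
# Head: tens of millions of mentions; tail: a few dozen. The training
# signal for tail-entity facts is orders of magnitude weaker.
```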
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models (concept)