Claim
Large language model (LLM) hallucinations are especially severe when the model is queried about tail entities (entities that appear only rarely in its training data) or about information from after its training cutoff date.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)