Claim
Hallucinations in large language models are especially severe when the model is queried about tail entities (rare, sparsely documented entities) or about information postdating the model's training cutoff.

Authors

Sources

Referenced by nodes (1)