claim
Large language models frequently hallucinate facts about long-tail entities because those entities appear too rarely in the training data for the model to encode accurate information about them.

Authors

Sources

Referenced by nodes (1)