claim
Large language models are prone to hallucinating facts that appear only once in the training data, known as singletons, because a single occurrence provides too little signal for the model to memorize the fact reliably, so it tends to generate a plausible-sounding substitute instead.
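A minimal sketch of what "singleton" means quantitatively: count how often each fact occurs in a corpus and report the fraction of observations that are singletons. This fraction is the Good-Turing missing-mass estimate, which calibration-based arguments use as a rough lower bound on the hallucination rate for such facts. The corpus and the `singleton_rate` helper here are hypothetical illustrations, not from the claim's sources.

```python
from collections import Counter

def singleton_rate(facts):
    """Fraction of observations whose fact appears exactly once
    in the corpus (the Good-Turing missing-mass estimate)."""
    counts = Counter(facts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(facts)

# Toy corpus of fact observations (hypothetical data):
# "a" and "d" each occur once, so 2 of 7 observations are singletons.
facts = ["a", "b", "b", "c", "c", "c", "d"]
print(singleton_rate(facts))  # → 0.2857142857142857
```

Intuitively, the higher this fraction, the larger the share of training facts the model has seen only once and is therefore likely to get wrong when queried.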
