claim
Large language models reliably learn facts about entities once those entities appear more than 500 times in the training data; above this frequency, the hallucination-rate curve flattens significantly.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models concept