Claim
Large language models may learn erroneous facts with more confidence than a single-source error would warrant: because content on the internet is widely copied and redistributed, one mistake can appear in many training documents, and the model treats that duplication as consensus.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models (concept)