claim
Large language models learn a prior in favor of confident assertion because their training data, which includes academic papers, news articles, and forum responses, predominantly contains confident, fluent, and authoritative prose. Since next-token training rewards imitating the dominant style of the corpus, models reproduce that confident register even when the underlying answer is uncertain.
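A toy sketch of the mechanism, using an assumed synthetic mini-corpus and a hand-picked hedge-word list (both illustrative, not from the source): if most training sentences assert facts outright, a maximum-likelihood estimate over sentence styles ends up heavily weighted toward confident assertion, which is the prior an imitating model inherits.

```python
from collections import Counter

# Toy corpus standing in for web/academic training text (illustrative only):
# most sentences assert facts outright; few hedge with uncertainty markers.
corpus = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The mitochondria is the powerhouse of the cell.",
    "Python was first released in 1991.",
    "I think the answer might be Paris.",
]

# Hypothetical hedge markers; a real analysis would use a richer lexicon.
HEDGES = {"might", "maybe", "perhaps", "possibly", "think", "unsure"}

def is_hedged(sentence: str) -> bool:
    words = sentence.lower().replace(",", "").replace(".", "").split()
    return any(word in HEDGES for word in words)

counts = Counter("hedged" if is_hedged(s) else "confident" for s in corpus)
total = sum(counts.values())

# The maximum-likelihood "prior" over styles mirrors the corpus mix,
# so imitation learning favors the confident register.
prior = {style: n / total for style, n in counts.items()}
print(prior)  # → {'confident': 0.8, 'hedged': 0.2}
```

The point of the sketch is only the ratio: whatever style dominates the data dominates the learned distribution, independent of whether the model actually knows the fact being asserted.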
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (1)
- Large Language Models concept