Claim
For low-knowledge topics, such as post-cutoff news or obscure figures, large language models maintain a confidence level of 65-75% even as knowledge availability drops to 10-30%, creating the conditions for confident hallucination.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (1)
- Large Language Models (concept)