Claim
According to Kadavath et al. (2022), larger models tend to produce 'confident nonsense': model scaling alone does not eliminate hallucination and can amplify it in certain contexts.
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org, via Serper)
Referenced by nodes (1)
- hallucination concept