claim
Language models continue to hallucinate because they are optimized to be good test-takers: guessing when uncertain improves scores on most current evaluation benchmarks, which grade confident answers above honest abstention.
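The incentive behind this claim can be shown with a toy expected-score calculation (an illustrative sketch, not code from the paper): under binary 0/1 grading, any guess with nonzero probability of being correct has higher expected score than abstaining, which scores zero.

```python
def expected_score(p_correct: float, guess: bool) -> float:
    """Expected benchmark score for one question the model is unsure about.

    Binary grading assumed: 1 point for a correct answer, 0 otherwise;
    abstaining ("I don't know") also scores 0, so it is never rewarded.
    """
    return p_correct if guess else 0.0

# Even a 10%-confident guess beats abstaining in expectation,
# so a score-maximizing model is pushed toward confident guessing.
print(expected_score(0.10, guess=True))   # guessing
print(expected_score(0.10, guess=False))  # abstaining
```

Under this grading scheme the optimal test-taking policy is to always answer, regardless of confidence, which is the behavior observed as hallucination.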
Authors
Sources
- [arXiv:2509.04664] Why Language Models Hallucinate
Referenced by nodes (2)
- hallucination concept
- Language Model concept