Perspective
Kalai et al. (2025) argue that post-training evaluation benchmarks exacerbate hallucinations in large language models: by penalizing expressions of uncertainty, they incentivize models to guess rather than abstain from answering.
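The incentive described above can be sketched numerically. This is a minimal illustration (not code from the cited paper), assuming the common binary 0/1 grading scheme with no penalty for wrong answers; the function name and probabilities are hypothetical.

```python
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score under 0/1 grading with no penalty for wrong answers.

    Abstaining always scores 0, while guessing scores p_correct on average,
    so any nonzero chance of being right makes guessing the better strategy.
    """
    return 0.0 if abstain else p_correct

# Even a low-confidence guess (10% chance of being right) outscores abstaining.
guess = expected_score(0.1, abstain=False)
hold = expected_score(0.1, abstain=True)
assert guess > hold
```

Under this scoring rule a model maximizing its benchmark score should never abstain, which is the incentive problem the perspective attributes to Kalai et al.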
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (1)
- Large Language Models concept