Claim
Fan et al. (2025) found that models optimized for reasoning tend to fall into redundant loops of self-doubt and hallucination when faced with problems that are unsolvable because they lack necessary premises.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- hallucination concept