claim
Xu et al. (2024b) proved that hallucination is mathematically inevitable for any computable large language model, regardless of architecture or training data, as a consequence of fundamental limits from computability theory and learnability.
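A minimal sketch of the kind of diagonalization argument behind such a result, under a simplified formal world (the symbols f, h_i, s_i below are illustrative, not the paper's exact statement): the ground truth is a computable function f over an enumerable set of inputs s_0, s_1, ..., an LLM h hallucinates on s whenever h(s) differs from f(s), and for any computable enumeration of always-halting LLMs h_0, h_1, ... one can construct a ground truth that every model in the enumeration gets wrong:

\[
f(s_i) =
\begin{cases}
0 & \text{if } h_i(s_i) \neq 0,\\
1 & \text{if } h_i(s_i) = 0,
\end{cases}
\qquad \text{so } h_i(s_i) \neq f(s_i) \text{ for every } i.
\]

Because each h_i halts on s_i, f is itself computable, yet every candidate model hallucinates on at least one input, so no choice of architecture or training data escapes the construction.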
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- hallucination concept