claim
Xu et al. (2024b) proved that hallucination is mathematically inevitable for any computable large language model, regardless of architecture or training data, owing to fundamental limits of computability and learnability.

Authors

Sources

Referenced by nodes (1)