reference
The paper "The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination" by Zhang et al. (2025) examines knowledge overshadowing as a cause of LLM hallucinations.
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub repository)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept