Reference
Ye et al. (2023), "Cognitive Mirage: A Review of Hallucinations in Large Language Models," is a survey of the hallucination phenomenon in large language models.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)