Reference
The paper "A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions" (Huang et al., 2025), published in ACM Transactions on Information Systems, provides a comprehensive survey of hallucination phenomena in large language models.
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs - arXiv (arxiv.org)
- LLM Hallucination Detection and Mitigation: State of the Art in 2026 (zylos.ai)
- Awesome-Hallucination-Detection-and-Mitigation - GitHub (github.com)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept