reference
The paper 'HaDeMiF: Hallucination Detection and Mitigation in Large Language Models' (Zhou et al., 2025) addresses both the detection and the mitigation of hallucinations in LLMs.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (3)
- Large Language Models concept
- hallucination detection concept
- hallucination mitigation concept