reference
Chen et al. (2023) published "Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models" at CIKM 2023, focusing on identifying reliable answers from LLMs.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (1)
- hallucination detection concept