reference
Cohen et al. (2023) published "LM vs LM: Detecting Factual Errors via Cross Examination" on arXiv, proposing to detect factual errors in a language model's claims by having a second "examiner" LM interrogate the claim-making model over multiple turns and flag inconsistencies in its answers.
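The cross-examination loop can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `examinee`, `ask_questions`, and `verdict` callables are hypothetical stand-ins for prompted LM calls, and the round count is an assumption.

```python
from typing import Callable, List

def cross_examine(claim: str,
                  examinee: Callable[[str], str],
                  ask_questions: Callable[[str, List[str]], List[str]],
                  verdict: Callable[[str, List[str]], bool],
                  rounds: int = 2) -> bool:
    """Return True if the examiner judges `claim` factually consistent.

    examinee:      answers a single question (stand-in for the examined LM).
    ask_questions: produces follow-up questions given the claim and the
                   transcript so far (stand-in for the examiner LM).
    verdict:       final consistency judgment over the full transcript
                   (stand-in for the examiner LM's decision prompt).
    """
    transcript: List[str] = []
    for _ in range(rounds):
        for question in ask_questions(claim, transcript):
            answer = examinee(question)
            transcript.append(f"Q: {question}\nA: {answer}")
    return verdict(claim, transcript)

# Toy stand-ins for demonstration only: a "model" whose answers contradict
# the claim, so the examiner's verdict should be False.
claim = "The Eiffel Tower is in Rome."
examinee = lambda q: "It is in Paris."
ask_questions = lambda c, t: ["Where is the Eiffel Tower?"] if not t else []
verdict = lambda c, t: not any("Paris" in turn for turn in t)

result = cross_examine(claim, examinee, ask_questions, verdict)
```

In the toy run above the examinee's answer ("Paris") contradicts the claim ("Rome"), so the verdict is False; in the actual method both roles are played by prompted language models rather than hand-written lambdas.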
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (1)
- hallucination detection concept