Reference
The paper 'TRUE: Re-evaluating Factual Consistency Evaluation' by Honovich et al. (2022) introduces the TRUE benchmark, which standardizes diverse factual consistency datasets under a unified binary labeling scheme so that factual consistency metrics can be compared on equal footing; the paper was released as an arXiv preprint.
Authors
- Honovich et al.
Sources
- Re-evaluating Hallucination Detection in LLMs (arxiv.org)
Referenced by nodes (1)
- factual consistency evaluation concept