claim
Using auxiliary classifiers or LLMs as judges to score and post-edit generated content is a promising direction for hallucination mitigation, as identified by Liu et al. (2023).
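The claim above describes a score-then-revise loop: a judge scores a generated claim against evidence, and low-scoring output is post-edited before being emitted. A minimal sketch of that control flow follows; the `judge` and `post_edit` functions here are toy rule-based stand-ins (token overlap), not from the cited survey — in practice each would be an auxiliary classifier or an LLM call.

```python
def judge(claim: str, evidence: str) -> float:
    """Toy judge: fraction of claim tokens supported by the evidence."""
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(evidence.lower().split())
    if not claim_tokens:
        return 1.0
    return len(claim_tokens & evidence_tokens) / len(claim_tokens)

def post_edit(claim: str, evidence: str) -> str:
    """Toy editor: drop claim tokens the evidence does not support."""
    evidence_tokens = set(evidence.lower().split())
    kept = [t for t in claim.split() if t.lower() in evidence_tokens]
    return " ".join(kept)

def mitigate(claim: str, evidence: str, threshold: float = 0.8) -> str:
    """Accept well-supported claims; post-edit the rest before emitting."""
    if judge(claim, evidence) >= threshold:
        return claim
    return post_edit(claim, evidence)

evidence = "the model was trained on 300B tokens"
print(mitigate("the model was trained on 300B tokens", evidence))
print(mitigate("the model was trained on 500B tokens of code", evidence))
```

The hypothetical threshold of 0.8 and the overlap heuristic are illustrative assumptions; the design point is only that scoring and editing are separate, swappable components.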
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- LLM-as-a-judge concept