claim
Using auxiliary classifiers or LLMs as judges to score generated content and post-edit low-scoring outputs is a promising direction for hallucination mitigation, as identified by Liu et al. (2023).
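The claimed pipeline can be sketched minimally: score each generated claim with a judge, and post-edit only those that fall below a threshold. The `judge` and `post_edit` functions below are hypothetical stand-ins (a token-overlap heuristic in place of a real auxiliary classifier or LLM judge), not any method from Liu et al. (2023).

```python
def judge(claim: str, evidence: str) -> float:
    """Stub judge: fraction of claim tokens supported by the evidence.
    A real system would use a trained classifier or an LLM-as-judge."""
    claim_tokens = set(claim.lower().split())
    evidence_tokens = set(evidence.lower().split())
    if not claim_tokens:
        return 1.0
    return len(claim_tokens & evidence_tokens) / len(claim_tokens)

def post_edit(claim: str, evidence: str) -> str:
    """Stub editor: drop claim tokens absent from the evidence.
    A real system would prompt an editing model to revise the claim."""
    evidence_tokens = set(evidence.lower().split())
    return " ".join(t for t in claim.split() if t.lower() in evidence_tokens)

def mitigate(claim: str, evidence: str, threshold: float = 0.8) -> str:
    """Keep well-supported claims unchanged; post-edit the rest."""
    if judge(claim, evidence) >= threshold:
        return claim
    return post_edit(claim, evidence)

evidence = "the model was trained on 100 billion tokens"
print(mitigate("the model was trained on 100 billion tokens", evidence))
print(mitigate("the model was trained on 500 trillion tokens by Google", evidence))
```

The first call scores 1.0 and passes through unchanged; the second scores below the threshold, so unsupported tokens are edited out. In practice the judge and editor would be separate models, and the threshold a tuned hyperparameter.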

Authors

Sources

Referenced by nodes (1)