claim
Self-refining methods, which use a model to critique and then revise its own output, aim to make Large Language Model (LLM) reasoning more robust and to reduce hallucination, as described by Madaan et al. (2024), Dhuliawala et al. (2023), and Ji et al. (2023).
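The generate–critique–refine loop behind these methods can be sketched as follows. This is an illustrative sketch only, not the code of any cited paper: `generate`, `critique`, and `refine` are hypothetical stand-ins for LLM calls, implemented here as toy string functions so the loop is runnable.

```python
def generate(prompt):
    # Hypothetical stand-in for an initial LLM draft.
    return "draft: " + prompt

def critique(answer):
    # Hypothetical stand-in for the model critiquing its own output.
    # Returns None when the critic finds no further issues.
    if "draft" in answer:
        return "replace 'draft' with 'final'"
    return None

def refine(answer, feedback):
    # Hypothetical stand-in for revising the answer given the critique.
    return answer.replace("draft", "final")

def self_refine(prompt, max_rounds=3):
    """Generate an answer, then alternate critique and refinement
    until the critic is satisfied or the round budget is exhausted."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:
            break
        answer = refine(answer, feedback)
    return answer

print(self_refine("2+2"))  # → final: 2+2
```

The `max_rounds` budget is one common safeguard: without it, a critic that never declares the answer acceptable would loop indefinitely.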
