claim
Self-refining methods, in which a model critiques and then revises its own output, aim to make Large Language Model (LLM) reasoning more robust and thereby reduce hallucination, as described by Madaan et al. (2024), Dhuliawala et al. (2023), and Ji et al. (2023); a minimal sketch of such a loop follows this record.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org
Referenced by nodes (1)
- Large Language Models concept
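As a rough illustration of the critique-and-refine pattern named in the claim (not the specific procedure of Madaan et al. or Dhuliawala et al.), the Python sketch below assumes a hypothetical `call_llm` helper standing in for any chat-completion API; the prompts and stopping rule are illustrative assumptions.

```python
# Minimal sketch of a self-refine loop: the same model drafts an answer,
# critiques it, and revises the answer using its own feedback.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call (e.g. a hosted chat API)."""
    raise NotImplementedError

def self_refine(question: str, max_rounds: int = 3) -> str:
    # Initial draft from the model.
    answer = call_llm(f"Answer the question:\n{question}")
    for _ in range(max_rounds):
        # Ask the model to critique its own answer.
        critique = call_llm(
            "Critique the answer below for factual errors or unsupported claims. "
            "Reply 'OK' if none are found.\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model judges its own answer acceptable
        # Revise the answer conditioned on the critique.
        answer = call_llm(
            "Revise the answer using the critique.\n"
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}"
        )
    return answer
```

The loop stops either when the model's self-critique reports no problems or after a fixed number of rounds, which keeps the refinement cost bounded.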