claim
Self-refining methods use an LLM to critique and then revise its own output, improving the robustness of its reasoning and reducing hallucination; a minimal loop is sketched after this record.
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org)
Referenced by nodes (1)
- hallucination concept
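
A minimal sketch of the critique-and-refine loop the claim describes, assuming a hypothetical `llm(prompt: str) -> str` completion function; the prompt wording, the `NO ISSUES` stopping signal, and the `max_rounds` cap are illustrative choices, not details taken from the cited source.

```python
from typing import Callable


def self_refine(task: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Ask the model to draft an answer, then repeatedly critique and revise it."""
    # Initial draft.
    answer = llm(f"Answer the following task:\n{task}")
    for _ in range(max_rounds):
        # Self-critique pass: the same model inspects its own draft.
        critique = llm(
            f"Task:\n{task}\n\nDraft answer:\n{answer}\n\n"
            "Point out factual errors, unsupported claims, or reasoning gaps. "
            "Reply with 'NO ISSUES' if the draft is sound."
        )
        if "NO ISSUES" in critique.upper():
            break  # model judges its own draft acceptable
        # Refinement pass: revise the draft to address the critique.
        answer = llm(
            f"Task:\n{task}\n\nDraft answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nRewrite the answer to fix every issue raised."
        )
    return answer
```

Capping the number of rounds matters in practice: self-critique can oscillate rather than converge, so a fixed budget keeps the loop from refining indefinitely.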