claim
Self-refining methods use an LLM to critique and then revise its own output, improving the robustness of its reasoning process and reducing hallucination.

Authors

Sources

Referenced by nodes (1)