Claim
Self-refining methods for LLMs rely on prompting at each intermediate reasoning step and on the model's own reasoning ability to correct itself, which can lead to unreliable performance gains, according to Huang et al. (2023) and Li et al. (2024).
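A minimal sketch of the kind of prompting loop such methods use, assuming a hypothetical `llm(prompt) -> str` completion function rather than any specific library's API. It shows the structural weakness the claim points at: generation, critique, and revision are all driven by prompting the same model, so there is no external verifier and a flawed self-critique propagates into the revision.

```python
from typing import Callable


def self_refine(
    llm: Callable[[str], str],  # hypothetical completion function (assumption)
    question: str,
    max_rounds: int = 3,
) -> str:
    """Generate an answer, then let the model critique and revise it.

    Every intermediate step is a prompt to the same model: if the
    self-critique is wrong, the revision inherits the error.
    """
    answer = llm(f"Answer step by step:\n{question}")
    for _ in range(max_rounds):
        critique = llm(
            f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
            "List any reasoning errors, or reply exactly NO ERRORS."
        )
        if "NO ERRORS" in critique:
            break  # the model has judged its own answer correct
        answer = llm(
            f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nRewrite the answer, fixing the errors."
        )
    return answer


# Trivial stub so the sketch runs without an API; a real model goes here.
if __name__ == "__main__":
    print(self_refine(lambda prompt: "NO ERRORS", "What is 17 * 24?"))
```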
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)
Referenced by nodes (1)
- Large Language Models (concept)