claim
The LLM self-check method is effective at catching mistakes in model output, but it tends to hallucinate the presence of falsehoods in otherwise correct responses, causing valid outputs to be incorrectly flagged as 'false'.
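The self-check pattern behind this claim can be sketched as follows. This is a minimal illustration, not the exact method the claim refers to; `call_llm` is a hypothetical stand-in for a real model API, stubbed here with canned replies so the failure mode is visible.

```python
# Sketch of an LLM self-check loop (assumption: a generic pattern,
# not the specific method this claim evaluates).

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would query a model API.
    if prompt.startswith("VERIFY:"):
        # The failure mode in the claim: the checker hallucinates a
        # falsehood inside a correct answer and returns 'false'.
        return "false"
    return "The capital of France is Paris."

def self_check(question: str) -> tuple[str, bool]:
    """Generate an answer, then ask the model to verify its own output."""
    answer = call_llm(question)
    verdict = call_llm(
        f"VERIFY: Is this answer correct? Q: {question} A: {answer}"
    )
    return answer, verdict.strip().lower() == "true"

answer, ok = self_check("What is the capital of France?")
# Here the answer is valid, yet the self-check marks it as not ok,
# illustrating a correct response being flagged 'false'.
```

The point of the sketch is that the verification pass is itself a model call, so it inherits the model's tendency to hallucinate — here manifesting as a spurious 'false' verdict on a correct answer.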

Authors

Sources

Referenced by nodes (1)