claim
Chain-of-Thought prompting can backfire when a model fundamentally lacks the knowledge needed to answer a query: instead of abstaining, the model may use the step-by-step format to rationalize a falsehood in elaborate detail, making the hallucination more convincing rather than less likely.
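A minimal sketch of the two prompting styles the claim contrasts. The query and the prompt template here are hypothetical illustrations, not tied to any specific model or API; any chat-style LLM endpoint could consume these strings.

```python
def direct_prompt(question: str) -> str:
    """Plain prompt: the model answers (or abstains) in one step."""
    return question

def chain_of_thought_prompt(question: str) -> str:
    """CoT prompt: appends the common 'step by step' instruction.

    Per the claim above, if the model lacks the underlying knowledge,
    this format can invite it to construct a detailed reasoning chain
    around a false premise instead of abstaining.
    """
    return f"{question}\nLet's think step by step."

# Hypothetical query the model may not have knowledge about.
q = "In which year did explorer X first cross river Y?"
print(direct_prompt(q))
print(chain_of_thought_prompt(q))
```

The only mechanical difference is the appended instruction; the claim concerns how that instruction changes failure behavior, not the prompt text itself.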

Authors

Sources

Referenced by nodes (2)