Claim
Large Language Models can exhibit reasoning inconsistency: a mismatch between the stated reasoning process and the final answer, such as arriving at a correct answer via a flawed reasoning path, or at an incorrect answer via a valid one.
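One form of this inconsistency can be made concrete with a small sketch: verify each arithmetic step in a chain of thought independently, so that a flawed step is flagged even when the final answer happens to be correct. The trace below is an illustrative, made-up example, not real model output.

```python
import re

def flawed_steps(chain_of_thought: str) -> list:
    """Return arithmetic steps of the form 'a <op> b = c' whose result is wrong."""
    bad = []
    for line in chain_of_thought.strip().splitlines():
        m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*=\s*(-?\d+)\s*", line)
        if not m:
            continue  # skip lines that are not simple arithmetic steps
        a, op, b, c = int(m[1]), m[2], int(m[3]), int(m[4])
        actual = {"+": a + b, "-": a - b, "*": a * b}[op]
        if actual != c:
            bad.append(line.strip())
    return bad

# Correct final answer (13) reached through an improper step (3 * 4 = 11):
trace = "3 * 4 = 11\n12 + 1 = 13"
print(flawed_steps(trace))  # ['3 * 4 = 11']
```

Step-level verification like this catches the "correct answer, improper path" case; the converse case (valid reasoning, wrong final answer) would instead be caught by comparing the reasoning's conclusion against the reported answer.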
