Claim
Chain-of-Thought prompting and instruction-based inputs each help mitigate hallucinations in Large Language Models, but neither is sufficient in isolation.

Authors

Sources
