claim
Hallucination scores for language models change little across prompting techniques such as zero-shot, few-shot, chain-of-thought (CoT), and instruction formats. Because these prompts are semantically equivalent and decoding is low-entropy, the outputs are dominated by the model's learned alignment policy rather than by the prompt format.
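The mechanism behind the claim can be sketched with a toy model: treat each prompting technique as a small perturbation on the model's next-token logits, and apply low-temperature (low-entropy) decoding. The logit values, perturbation sizes, and prompt names below are all illustrative assumptions, not measurements from any real model.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given decoding temperature."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits standing in for the learned alignment policy.
base_logits = [4.0, 1.0, 0.5, 0.2]

# Small perturbations standing in for semantically equivalent prompt formats.
prompt_deltas = {
    "zero_shot":   [0.00,  0.00, 0.00, 0.00],
    "few_shot":    [0.10, -0.05, 0.02, 0.00],
    "cot":         [-0.08, 0.10, 0.00, 0.03],
    "instruction": [0.05,  0.00, -0.10, 0.02],
}

# At low temperature the distribution is sharply peaked, so the top token is
# identical for every variant: the base policy dominates small prompt effects.
for name, deltas in prompt_deltas.items():
    probs = softmax([b + d for b, d in zip(base_logits, deltas)],
                    temperature=0.3)
    top = probs.index(max(probs))
    print(f"{name}: top token index = {top}")
```

Under these assumptions every prompt variant selects the same top token, which is the sense in which hallucination-related behavior would look stable across prompting techniques.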

Authors

Sources

Referenced by nodes (2)