procedure
Confident decoding mitigates LLM hallucinations by adjusting the decoding process to avoid low-probability outputs, which are more likely to be hallucinated.
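One common way to realize this idea is to mask out candidate tokens whose probability falls below a confidence threshold before sampling, similar in spirit to min-p filtering. The sketch below is illustrative only; the function name `confident_filter` and the `min_prob` parameter are hypothetical, not from the source.

```python
import math

def confident_filter(logits, min_prob=0.1):
    """Keep only tokens whose probability meets a confidence threshold.

    Hedged sketch of confidence-based decoding: low-probability tokens,
    which are more likely to be hallucinated, are excluded from the
    sampling distribution. `min_prob` is an assumed hyperparameter.
    """
    # Numerically stable softmax over the raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Drop tokens below the confidence threshold.
    allowed = [i for i, p in enumerate(probs) if p >= min_prob]
    # Renormalize over the surviving tokens.
    z = sum(probs[i] for i in allowed)
    return {i: probs[i] / z for i in allowed}

# Example: the third token is very unlikely and gets filtered out,
# so it can never be sampled.
dist = confident_filter([2.0, 1.0, -3.0], min_prob=0.1)
```

In a real decoder this filter would run at every generation step, with sampling restricted to the renormalized distribution; a stricter variant abstains entirely when no token clears the threshold.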
