claim
Greedy decoding in Large Language Models selects the argmax (highest-probability) token at each step; these locally optimal choices do not guarantee the highest-probability sequence overall, and the strategy is prone to degenerate repetition loops.
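A minimal sketch of the failure mode, using a hypothetical toy "model" (a hand-written next-token score table, not a real LM, which would condition on the full prefix rather than only the last token):

```python
# Toy next-token scores keyed by the previous token only.
# NEXT_SCORES and the token strings are illustrative assumptions.
NEXT_SCORES = {
    "<s>": {"the": 2.0, "a": 1.0},
    "the": {"cat": 1.5, "dog": 1.4},
    "cat": {"sat": 1.2, "the": 1.3},  # "the" narrowly wins -> loop
    "sat": {".": 1.0},
}

def greedy_decode(start="<s>", max_steps=8):
    """Greedy decoding: at each step, take the argmax token with no lookahead."""
    tokens = [start]
    for _ in range(max_steps):
        dist = NEXT_SCORES.get(tokens[-1])
        if dist is None:
            break
        tokens.append(max(dist, key=dist.get))  # locally optimal choice
    return tokens[1:]

print(greedy_decode())  # falls into "the cat the cat ..." repetition
```

Each step is individually optimal, yet because "the" barely outscores "sat" after "cat", the decoder cycles between "the" and "cat" forever and never reaches the plausible ending "the cat sat .", whose total score a search with lookahead (e.g. beam search) could prefer.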
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (1)
- Large Language Models concept