claim
The repetition_penalty parameter penalizes tokens that appeared earlier in the sequence, which can prevent repetitive loops but may also discourage large language models from correctly reusing technical terms.
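The mechanism behind the claim can be sketched as follows. This is a minimal illustration of the common CTRL-style penalty (the scheme used by, e.g., Hugging Face's `repetition_penalty` logits processor): the logit of every token already in the sequence is divided by the penalty when positive and multiplied by it when negative, so previously seen tokens become less likely regardless of whether they are loops or legitimately repeated technical terms. The function name and example values are illustrative, not from the source.

```python
import numpy as np

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Dampen logits of tokens that already appeared in the sequence.

    Positive logits are divided by `penalty`, negative logits are
    multiplied by it, so both moves reduce the token's probability.
    """
    logits = logits.copy()
    for tok in set(generated_ids):
        if logits[tok] > 0:
            logits[tok] /= penalty
        else:
            logits[tok] *= penalty
    return logits

# Tokens 2 and 5 appeared earlier in the sequence; their logits shrink,
# even if token 2 is a technical term the model should reuse.
logits = np.array([1.0, 0.5, 2.0, -1.0, 0.0, -0.5])
penalized = apply_repetition_penalty(logits, generated_ids=[2, 5], penalty=2.0)
```

With `penalty=2.0`, token 2's logit drops from 2.0 to 1.0 and token 5's from -0.5 to -1.0, while unseen tokens are untouched; this indiscriminate dampening is why a high penalty can suppress correct reuse of terminology.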
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts mbrenndoerfer.com via serper
Referenced by nodes (1)
- Large Language Models concept