claim
The temperature parameter in large language models divides the logits before the softmax and sampling step; higher values (T > 1) flatten the resulting probability distribution, increasing diversity and hallucination risk, while lower values (T < 1) sharpen it toward the most probable tokens.
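A minimal sketch of the mechanism the claim describes: dividing logits by a temperature before the softmax. The function name and example logit values are illustrative, not from any particular model's API.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into sampling probabilities, scaled by temperature.

    T > 1 flattens the distribution (more randomness, higher hallucination risk);
    T < 1 sharpens it toward the highest-logit tokens; T = 1 is plain softmax.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical logits for a 3-token vocabulary
logits = [2.0, 1.0, 0.1]
p_sharp = softmax_with_temperature(logits, 0.5)  # low T: top token dominates
p_plain = softmax_with_temperature(logits, 1.0)  # standard softmax
p_flat  = softmax_with_temperature(logits, 2.0)  # high T: near-uniform
```

At T = 0.5 the most probable token's share grows, and at T = 2.0 the three probabilities move toward one another, matching the flattening/sharpening behavior stated in the claim.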
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept