Procedure
To reduce LLM hallucinations, the proposed scoring rule for model evaluation is: a correct answer scores +1 point, a wrong answer scores -t/(1-t) points, and answering "I don't know" scores 0 points, where t is the confidence threshold. Under this rule, the expected score of answering exceeds the 0 points earned by abstaining only when the model's confidence is above t, so guessing under uncertainty is penalized rather than rewarded.
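A minimal sketch (not from the source) illustrating why this rule discourages guessing: with a wrong-answer penalty of t/(1-t), the expected score of answering crosses zero exactly at confidence p = t, so a rational model should abstain whenever p < t.

```python
def expected_score(p: float, t: float) -> float:
    """Expected score for answering with confidence p under threshold t:
    +1 with probability p (correct), -t/(1-t) with probability 1-p (wrong)."""
    return p * 1.0 - (1.0 - p) * (t / (1.0 - t))

# Abstaining ("I don't know") always scores 0, so answering is only
# rational when expected_score(p, t) > 0, which happens exactly when p > t.
for t in (0.5, 0.75, 0.9):
    for p in (t - 0.1, t, t + 0.05):
        print(f"t={t:.2f} p={p:.2f} -> E[score]={expected_score(p, t):+.3f}")
```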
Authors
Sources
- What Really Causes Hallucinations in LLMs? - AI Exploration Journey (aiexpjourney.substack.com)
Referenced by nodes (1)
- LLM hallucinations in medicine concept