measurement
Giving Large Language Models a 'not sure' response option improves hallucination detection precision by up to 38% on the MedHallu benchmark.
Sources
- MedHallu (GitHub, github.com)
Referenced by nodes (3)
- Large Language Models concept
- hallucination detection concept
- MedHallu concept
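The measurement above can be illustrated with a toy calculation (all detector outputs below are hypothetical, not MedHallu data): when a detector is allowed to answer "not sure" on borderline cases, those abstentions are excluded from its confident predictions, which removes likely false positives and raises precision.

```python
# Hypothetical illustration of why a 'not sure' option can raise
# hallucination detection precision: abstaining on low-confidence
# cases removes likely false positives from the confident calls.
# Labels and predictions are made up for this sketch.

def precision(preds, labels, positive="hallucinated"):
    """Precision over confident predictions only ('not sure' is skipped)."""
    tp = fp = 0
    for p, y in zip(preds, labels):
        if p == "not sure":
            continue  # abstentions do not count toward precision
        if p == positive:
            if y == positive:
                tp += 1
            else:
                fp += 1
    return tp / (tp + fp) if (tp + fp) else 0.0

labels = ["hallucinated", "faithful", "hallucinated", "faithful", "faithful"]

# Forced binary detector: must commit on every example.
binary = ["hallucinated", "hallucinated", "hallucinated", "faithful", "hallucinated"]

# Same detector, allowed to abstain on its two least-confident calls.
abstain = ["hallucinated", "not sure", "hallucinated", "faithful", "not sure"]

print(precision(binary, labels))   # 2 TP / 4 positive calls  -> 0.5
print(precision(abstain, labels))  # 2 TP / 2 confident calls -> 1.0
```

The gain depends entirely on whether the model abstains on the cases it would otherwise get wrong; abstaining on would-be true positives lowers recall instead.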