Procedure
The authors evaluated the effectiveness of hallucination mitigation techniques on Large Language Models using the Med-HALT benchmark, sampling 50 examples from each of its seven medical reasoning tasks for a total of 350 cases.
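The per-task sampling step can be sketched as follows. This is a minimal illustration, not the authors' code: the task names and example pools are hypothetical placeholders, and a fixed seed is assumed for reproducibility.

```python
import random

# Hypothetical placeholder names standing in for Med-HALT's seven
# medical reasoning tasks (the real task names are not given here).
TASKS = [f"task_{i}" for i in range(1, 8)]

def sample_benchmark(examples_by_task, n_per_task=50, seed=0):
    """Draw a fixed-size random sample from each task's example pool."""
    rng = random.Random(seed)  # fixed seed so the 350-case subset is reproducible
    return {task: rng.sample(pool, n_per_task)
            for task, pool in examples_by_task.items()}

# Toy pools: 100 synthetic examples per task, for demonstration only.
pools = {task: [f"{task}_ex{j}" for j in range(100)] for task in TASKS}
subset = sample_benchmark(pools)
total = sum(len(v) for v in subset.values())
print(total)  # 7 tasks x 50 examples = 350 cases
```

Sampling per task rather than from the pooled benchmark keeps each of the seven tasks equally represented in the 350-case evaluation set.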
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (3)
- Large Language Models concept
- hallucination mitigation concept
- Med-HALT concept