claim
The MedHallu benchmark defines hallucination in large language models as output that is plausible-sounding but factually incorrect.
Authors
Sources
- [Literature Review] MedHallu: A Comprehensive Benchmark for ... www.themoonlight.io via serper
Referenced by nodes (3)
- Large Language Models concept
- hallucination concept
- MedHallu concept