claim
Hallucinations that are semantically closer to the ground truth are harder to detect: large language models struggle most with identifying such subtly incorrect information, precisely because it differs from the truth in only minor details.
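A toy sketch of the intuition behind this claim: if "semantic closeness" is approximated by a similarity score between a candidate answer and the ground truth, a subtle hallucination scores much higher (closer) than a blatant one. The sketch below uses bag-of-words cosine similarity as a crude stand-in for embedding-based semantic similarity, and the medical example sentences are hypothetical illustrations, not drawn from the MedHallu benchmark itself.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words vectors.

    A crude lexical proxy for semantic closeness; real detection
    pipelines would use sentence embeddings instead.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

ground_truth = "Metformin is first-line therapy for type 2 diabetes."
# Subtle hallucination: one detail changed, so it stays close to the truth.
subtle = "Metformin is second-line therapy for type 2 diabetes."
# Blatant hallucination: far from the ground truth, hence easy to flag.
blatant = "Insulin pumps cure type 1 diabetes permanently."

# The harder-to-detect (subtle) hallucination is the more similar one.
assert cosine_similarity(ground_truth, subtle) > cosine_similarity(ground_truth, blatant)
```

Under this proxy, the subtle variant shares nearly all its tokens with the ground truth, mirroring why detectors that rely on deviation from the truth find such cases hardest.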
Authors
Sources
- [Literature Review] MedHallu: A Comprehensive Benchmark for ... (www.themoonlight.io)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept