reference
MHaluBench is a meta-evaluation benchmark that covers multiple hallucination categories and multimodal tasks for Multimodal Large Language Models (MLLMs).
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub)
Referenced by nodes (1)
- Multimodal Large Language Models concept