MHaluBench is a meta-evaluation benchmark that spans multiple hallucination categories and multimodal tasks for Multimodal Large Language Models (MLLMs).

