Claim
Med-HallMark includes multifaceted hallucination data across three dimensions: ground truth (GT) standards, Large Vision-Language Model (LVLM) responses to prompts, and fine-grained annotations of LVLM-generated content that detail both the type of each hallucination and its correctness.
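As a rough illustration of how one such record could be represented, the sketch below models the three dimensions as Python dataclasses; every class and field name here is hypothetical and is not drawn from the actual Med-HallMark release or schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HallucinationAnnotation:
    """Fine-grained label for one fragment of an LVLM output.
    Field names are illustrative, not the dataset's actual schema."""
    span: str                 # the annotated fragment of the model output
    hallucination_type: str   # a category from the benchmark's taxonomy
    is_correct: bool          # correctness judgment for this fragment

@dataclass
class MedHallMarkRecord:
    """One record covering the three dimensions in the claim above."""
    prompt: str                                  # question posed to the LVLM
    ground_truth: str                            # GT standard answer
    lvlm_output: str                             # model response to the prompt
    annotations: List[HallucinationAnnotation]   # fine-grained labels
```

Keeping the annotations as a list of per-span labels mirrors the "fine-grained" framing of the claim: each fragment of the generated content can carry its own hallucination type and correctness judgment, independently of the record-level prompt, GT standard, and full model output.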
Authors
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... (arxiv.org)
Referenced by nodes (1)
- Large Vision-Language Models concept