claim
The authors of 'Detecting and Evaluating Medical Hallucinations in Large Vision Language Models' propose a novel benchmark, evaluation metrics, and a detection model, all tailored to the medical domain, to address the challenges of detecting and evaluating hallucinations in Large Vision Language Models (LVLMs).
Authors
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org via serper