claim
Large Vision-Language Models (LVLMs) show no significant differences in attribute hallucinations and share similar error boundaries, indicating a common difficulty in correctly judging or describing the size, shape, or number of organs and pathologies.
Authors
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org via serper
Referenced by nodes (1)
- Large Vision-Language Models concept