Claim
Hallucination in Large Vision-Language Models (LVLMs) is defined as the generation of descriptions that are inconsistent with the given image and user instructions, i.e., descriptions that mention objects, attributes, or relationships not grounded in the visual input.
Authors
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org via serper
Referenced by nodes (2)
- hallucination concept
- Large Vision-Language Models concept