claim
Hallucination in Large Vision-Language Models (LVLMs) is defined as the generation of descriptions that are inconsistent with the relevant images and user instructions: the output mentions objects, attributes, or relationships that are not grounded in the visual input.
