claim
Hallucinations in Large Vision-Language Models (LVLMs) originate from three interacting causal pathways: image-to-input-text, image-to-output-text, and text-to-text.
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub, github.com)
Referenced by nodes (2)
- hallucination concept
- Large Vision-Language Models concept