Claim
When presented with counterfactual questions, nearly all models produce prompt-induced hallucinations in numbers close to or exceeding those of catastrophic hallucinations, indicating that Large Vision-Language Models (LVLMs) are highly vulnerable to such attacks.
