Claim
Nearly all models produce a number of prompt-induced hallucinations close to or exceeding their number of catastrophic hallucinations when presented with counterfactual questions, indicating that Large Vision-Language Models (LVLMs) are highly vulnerable to such prompt-based attacks.
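A minimal sketch of the kind of counterfactual probe the claim refers to, not the cited paper's actual protocol: `query_lvlm` is a hypothetical stand-in for whatever LVLM inference call is used, and the keyword-based rejection check is a simplification for illustration only.

```python
# Hedged sketch: scoring a counterfactual (false-premise) question as a
# prompt-induced hallucination. Names below are illustrative assumptions,
# not the paper's code or API.

def query_lvlm(image_path: str, question: str) -> str:
    """Hypothetical LVLM call; replace with a real model or client."""
    raise NotImplementedError

def is_prompt_induced_hallucination(
    image_path: str,
    false_premise_question: str,
    rejection_markers=("no", "not present", "cannot", "does not show"),
) -> bool:
    """A counterfactual question embeds a false premise about the image
    (e.g. asking about a finding that is not actually there). If the model
    answers as though the premise were true instead of rejecting it, the
    response is counted as a prompt-induced hallucination."""
    answer = query_lvlm(image_path, false_premise_question).lower()
    rejected_premise = any(marker in answer for marker in rejection_markers)
    return not rejected_premise

# Example probe (hypothetical): the image is assumed to contain no pneumothorax,
# so a model that describes one has accepted the false premise.
# is_prompt_induced_hallucination(
#     "chest_xray_001.png",
#     "Describe the pneumothorax visible in this chest X-ray.",
# )
```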
Authors
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org via serper
Referenced by nodes (2)
- Large Vision-Language Models concept
- prompt-induced hallucination concept