Reference
The paper "V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization" by Yang et al. (2024) introduces V-DPO, a variant of direct preference optimization that incorporates visual guidance to reduce hallucination in large vision-language models.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation - GitHub (github.com)
Referenced by nodes (2)
- Large Vision-Language Models concept
- hallucination mitigation concept