Reference
The paper 'Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection' by Yang et al. (2025) proposes a projection-based method for mitigating object hallucinations in LVLMs.
Authors
- Yang et al.
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (1)
- Large Vision-Language Models concept