reference
The paper "Image Tokens Matter: Mitigating Hallucination in Discrete Tokenizer-based Large Vision-Language Models via Latent Editing" (Wang et al., 2025) proposes a latent-editing method for mitigating hallucination in LVLMs that use discrete visual tokenizers.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (2)
- Large Vision-Language Models concept
- hallucination mitigation concept