Reference
The paper "Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs" (Gu et al., 2025) introduces a method for aligning large language models toward factuality at a fine-grained level, with alignment that generalizes beyond the training domains.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (1)
- Large Language Models concept