reference
The paper "UALIGN: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models" by Xue et al. (2025) introduces a framework that leverages uncertainty estimations to align large language models toward more factual responses.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (2)
- Large Language Models (concept)
- uncertainty estimation (concept)