reference
The paper 'Uncertainty-Aware Fusion: An Ensemble Framework for Mitigating Hallucinations in Large Language Models' by Dey et al. (2025) proposes an uncertainty-aware ensemble framework for mitigating hallucinations in large language models.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (2)
- Large Language Models concept
- hallucination mitigation concept