Reference
The paper "FLAME: Factuality-Aware Alignment for Large Language Models" by Lin et al. (2024) introduces an alignment method that explicitly accounts for factuality, aiming to reduce hallucination in aligned large language models.
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub, github.com)
Referenced by nodes (1)
- Large Language Models (concept)