reference
The paper 'Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?' by Gekhman et al. (2024) examines how fine-tuning on examples that introduce knowledge absent from pretraining affects hallucination rates, finding that models learn such examples more slowly and that, once learned, they correlate with an increased tendency to hallucinate.
Authors
Sources
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (3)
- Large Language Models (concept)
- hallucination (concept)
- fine-tuning (concept)