Relations (1)
related 2.81 — strongly supporting 6 facts
Fine-tuning is studied both as a way to mitigate hallucinations in language models and as a potential cause of them, as evidenced by work on integrating new knowledge [1], on the influence of specific training examples [2], and on incorporating retrieval-augmented context {fact:1, fact:5}. Fine-tuning strategies are also evaluated for their ability to improve reasoning and reduce hallucination rates through alignment with structured data [3] and through transfer learning [4].
Facts (6)
Sources
Awesome-Hallucination-Detection-and-Mitigation - GitHub github.com 2 facts
reference: The paper 'Unfamiliar Finetuning Examples Control How Language Models Hallucinate' by Kang et al. (2024) investigates the impact of finetuning examples on hallucination behavior.
reference: The paper 'Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?' by Gekhman et al. (2024) examines the relationship between fine-tuning on new knowledge and hallucination rates.
KG-RAG: Bridging the Gap Between Knowledge and Creativity - arXiv arxiv.org 1 fact
claim: Retrieval-Augmented Generation (RAG) can alleviate hallucinations and outperforms traditional fine-tuning methods for applications requiring high accuracy and up-to-date information by integrating external knowledge more effectively.
On Hallucinations in Artificial Intelligence–Generated Content ... jnm.snmjournals.org 1 fact
claim: Transfer learning, which involves leveraging publicly pretrained models and fine-tuning them on local data, is an effective strategy for balancing generalization and specialization to mitigate hallucinations.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com 1 fact
claim: Fine-tuning an LLM on embedded graph data aligns the model's general language understanding with the structured knowledge from the KG, which improves contextual features, increases reasoning capabilities, and reduces hallucinations.
Survey and analysis of hallucinations in large language models frontiersin.org 1 fact
reference: Li et al. (2022) proposed fine-tuning methods that incorporate retrieved factual context to reduce hallucinations.
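The retrieval-augmented-context idea referenced in facts 1 and 5 can be sketched minimally: retrieve relevant facts from an external store and prepend them to the prompt, so the model answers from given context rather than parametric memory alone. All names and the toy fact store below are hypothetical illustrations, not any paper's actual implementation.

```python
# Minimal sketch of retrieval-augmented prompting (all names hypothetical).
# A real system would use a vector index; a keyword match stands in here.

FACT_STORE = {
    "capital of france": "Paris is the capital of France.",
    "speed of light": "The speed of light in vacuum is 299,792,458 m/s.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retriever: a fact matches if every key word appears
    in the query. Stands in for real vector-similarity search."""
    q = query.lower()
    return [fact for key, fact in FACT_STORE.items()
            if all(word in q for word in key.split())]

def build_prompt(query: str) -> str:
    """Prepend retrieved factual context so the model is grounded in it."""
    context = "\n".join(retrieve(query)) or "(no context found)"
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

For example, `build_prompt("What is the capital of France?")` yields a prompt whose context section contains the Paris fact, which is the grounding step the cited fine-tuning methods train the model to exploit.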