Relations (1)
Facts (15)
Sources
Medical Hallucination in Foundation Models and Their ... (medrxiv.org) · 3 facts
Claim: Fine-tuning large language models on biomedical corpora significantly improves their understanding of clinical text, as demonstrated by Alsentzer et al. (2019).
Claim: To remain clinically relevant, large language models require regular fine-tuning on updated medical data and integration with dynamic knowledge-retrieval systems, such as tools capable of real-time evidence synthesis.
Claim: Robust fine-tuning procedures and retrieval-augmented generation can improve the balance of training data, helping to mitigate availability bias in large language models.
The construction and refined extraction techniques of knowledge ... (nature.com) · 2 facts
Claim: The framework aims to balance lightweight fine-tuning of large language models (LLMs) with multi-task adaptability.
Claim: Pre-trained large language models (LLMs) such as GPT-4 and LLaMA-3 combine large-scale pretraining with task-specific fine-tuning to achieve cross-task generalization.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... (arxiv.org) · 2 facts
Claim: Instruction tuning and reinforcement learning from human feedback (RLHF) are methods applied on top of fine-tuning to ensure large language models follow human instructions, align with human values, and exhibit desired behaviors.
Procedure: The training process for large language models (LLMs) generally consists of two stages: pre-training and fine-tuning.
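The two-stage regime described above can be illustrated with a deliberately tiny, self-contained sketch: a toy bigram language model is first "pre-trained" on a broad corpus, then "fine-tuned" on a small domain corpus whose counts are weighted more heavily. The model, the corpora, and the weighting scheme are all invented for illustration; real LLM training uses gradient descent over neural networks, not frequency counts.

```python
from collections import Counter, defaultdict

class BigramLM:
    """Toy bigram model illustrating the two-stage regime:
    broad pre-training followed by weighted domain fine-tuning."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, corpus, weight=1):
        # Count next-word frequencies; `weight` lets fine-tuning
        # data count more heavily than pre-training data.
        for sentence in corpus:
            tokens = sentence.lower().split()
            for a, b in zip(tokens, tokens[1:]):
                self.counts[a][b] += weight

    def predict(self, word):
        nxt = self.counts.get(word.lower())
        return nxt.most_common(1)[0][0] if nxt else None

# Stage 1: pre-training on a broad general corpus.
lm = BigramLM()
lm.train(["the model reads general text",
          "the model writes general text"])

# Stage 2: fine-tuning on a small domain corpus with higher weight.
lm.train(["the model answers clinical questions"], weight=5)

print(lm.predict("model"))  # fine-tuned behaviour dominates: "answers"
```

The weighted second pass is what makes the small domain corpus override the pre-training statistics, which is the intuition behind fine-tuning's outsized effect relative to its data volume.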
A Survey on the Theory and Mechanism of Large Language Models (arxiv.org) · 2 facts
Reference: The paper 'SMT: fine-tuning large language models with sparse matrices' (The Thirteenth International Conference on Learning Representations) is cited in this survey regarding fine-tuning.
Claim: The paper 'All roads lead to likelihood: the value of reinforcement learning in fine-tuning' (arXiv:2503.01067) analyzes the role and value of reinforcement learning in fine-tuning large language models.
Do LLMs Build World Representations? Probing Through ... (neurips.cc) · 1 fact
Claim: Fine-tuning and advanced pre-training strengthen the tendency of large language models to maintain goal-oriented abstractions during decoding, prioritizing task completion over recovery of the world's state and dynamics.
Awesome-Hallucination-Detection-and-Mitigation (github.com) · 1 fact
Reference: The paper 'Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?' by Gekhman et al. (2024) examines the relationship between fine-tuning on new knowledge and hallucination rates.
Combining Knowledge Graphs and Large Language Models (arxiv.org) · 1 fact
Claim: Future research on combining knowledge graphs and large language models may address ineffective knowledge integration by modifying model architecture, fine-tuning, or injecting knowledge into feature-based pre-training models.
A survey on augmenting knowledge graphs (KGs) with large ... (link.springer.com) · 1 fact
Claim: Fine-tuning large language models (LLMs) with knowledge graphs involves adapting pre-trained LLMs to use structured information from KGs to generate contextually accurate responses.
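One common way to adapt an LLM to knowledge-graph content, consistent with the claim above, is to verbalize (subject, predicate, object) triples into instruction-style training pairs before fine-tuning. The sketch below uses hypothetical predicate templates and an invented mini-graph; it shows only the data-preparation step, not the training itself.

```python
# Hypothetical templates mapping KG predicates to prompt|response text.
TEMPLATES = {
    "treats": "What does {s} treat?|{s} treats {o}.",
    "causes": "What can {s} cause?|{s} can cause {o}.",
    "interacts_with": "What does {s} interact with?|{s} interacts with {o}.",
}

def verbalize(triples):
    """Map (subject, predicate, object) triples to (prompt, response) pairs."""
    pairs = []
    for s, p, o in triples:
        template = TEMPLATES.get(p)
        if template is None:
            continue  # skip predicates we have no template for
        prompt, response = template.format(s=s, o=o).split("|")
        pairs.append({"prompt": prompt, "response": response})
    return pairs

# Invented mini knowledge graph for illustration.
kg = [("aspirin", "treats", "headache"),
      ("aspirin", "interacts_with", "warfarin"),
      ("aspirin", "made_of", "salicylic acid")]  # dropped: no template

examples = verbalize(kg)
for ex in examples:
    print(ex["prompt"], "->", ex["response"])
```

The resulting prompt/response pairs would then be fed to whatever supervised fine-tuning pipeline the deployment uses; the template table is where domain knowledge about how to phrase each relation lives.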
Survey and analysis of hallucinations in large language models (frontiersin.org) · 1 fact
Perspective: For developers deploying large language models, selecting models based on attribution patterns (Prompt Sensitivity vs. Model Vulnerability) can inform fine-tuning strategies.
Unlock the Power of Knowledge Graphs and LLMs - TopQuadrant (topquadrant.com) · 1 fact
Claim: Knowledge graphs improve the accuracy and contextual understanding of large language models and generative AI through retrieval-augmented generation (RAG), prompt-to-query techniques, or fine-tuning.
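The RAG pathway mentioned in the last claim can be sketched minimally: retrieve the triples whose subject appears in the user's question, then prepend them to the prompt as grounding context. The tiny graph and the keyword-matching retriever below are illustrative assumptions, not a real retrieval API; production systems use entity linking and graph queries instead.

```python
# Invented mini knowledge graph: (subject, predicate, object) triples.
KG = [
    ("GPT-4", "developed_by", "OpenAI"),
    ("LLaMA-3", "developed_by", "Meta"),
    ("GPT-4", "released_in", "2023"),
]

def retrieve(question, kg):
    """Naive keyword retrieval: triples whose subject occurs in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q]

def build_prompt(question, kg):
    """Prepend retrieved facts to the question as grounding context."""
    facts = retrieve(question, kg)
    context = "\n".join(f"- {s} {p.replace('_', ' ')} {o}"
                        for s, p, o in facts)
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who is GPT-4 developed by?", KG)
print(prompt)
```

The assembled prompt (context plus question) would then be sent to the LLM, so the model answers from retrieved graph facts rather than from parametric memory alone.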