Relations (1)

Relation score 2.32 (strongly supporting, 4 facts)

Parameter-Efficient Fine-Tuning (PEFT) is a methodology designed to adapt Large Language Models while updating only a small fraction of their parameters, as evidenced by techniques such as LoRA [1] and KG-Adapter {fact:2, fact:3}. Furthermore, applying these fine-tuning methods to Large Language Models with domain-specific ontologies as input is a recognized strategy for improving accuracy and reducing hallucinations [2].
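To make the PEFT idea concrete, the sketch below illustrates the low-rank update at the heart of LoRA: the pretrained weight matrix W stays frozen, and only a small adapter pair (A, B) is trained, so the effective weight becomes W + (alpha / r) * A @ B. This is a minimal illustrative sketch, not the LoRA implementation from the cited paper; the function names, shapes, and scaling convention are assumptions chosen for clarity.

```python
# Minimal sketch of a LoRA-style low-rank adapter (illustrative assumptions:
# W has shape (d_in, d_out), A has shape (d_in, r), B has shape (r, d_out),
# and the adapter contribution is scaled by alpha / r, as in the LoRA paper).

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=1.0, r=1):
    """Compute y = x @ W + (alpha / r) * (x @ A) @ B.

    W is the frozen pretrained weight; only A and B would be trained.
    The low-rank product A @ B is never materialized: the adapter path
    goes through the r-dimensional bottleneck (x @ A) first.
    """
    base = matmul(x, W)              # frozen pretrained path
    adapt = matmul(matmul(x, A), B)  # trainable low-rank adapter path
    s = alpha / r
    return [[b + s * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, adapt)]

# With r = 1 the adapter adds only (d_in + d_out) trainable numbers,
# versus d_in * d_out for full fine-tuning of W.
y = lora_forward(x=[[3.0, 4.0]],
                 W=[[1.0, 0.0], [0.0, 1.0]],   # frozen identity weight
                 A=[[1.0], [0.0]],             # rank-1 down-projection
                 B=[[0.0, 2.0]])               # rank-1 up-projection
```

KG-Adapter follows the same parameter-efficiency principle, but the inserted adaptation layer is conditioned on knowledge-graph structure rather than being a generic low-rank update.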

Facts (4)

Sources
Large Language Models Meet Knowledge Graphs for Question ... (arxiv.org, arXiv), 2 facts
Reference: Tian et al. (2024) introduced 'KG-Adapter', a method for enabling knowledge graph integration in large language models through parameter-efficient fine-tuning.
Reference: KG-Adapter (Tian et al., 2024) improves parameter-efficient fine-tuning of large language models by introducing a knowledge adaptation layer.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph (stardog.com, Stardog), 1 fact
Claim: Using domain-specific ontologies as Parameter-Efficient Fine-Tuning (PEFT) input for Large Language Models improves accuracy and reduces the frequency of hallucinations.
The construction and refined extraction techniques of knowledge ... (nature.com, Nature), 1 fact
Reference: The LoRA (Low-Rank Adaptation) method is a technique for parameter-efficient fine-tuning of large language models, published in the ICLR 2022 proceedings.