concept

Parameter-Efficient Fine-Tuning

Also known as: Parameter-efficient fine-tuning methods, PEFT, parameter-efficient finetuning

Facts (16)

Sources
A Survey on the Theory and Mechanism of Large Language Models — arXiv (arxiv.org), Mar 12, 2026 — 7 facts
claim: Petrov et al. (2024) prove that Parameter-Efficient Fine-Tuning (PEFT) methods are less expressive than full fine-tuning.
claim: Parameter-Efficient Fine-Tuning (PEFT) methods optimize a small subset of model parameters to instill new, task-specific knowledge while preserving or enhancing the model's foundational knowledge.
claim: Low-Rank Adaptation (LoRA), introduced by Hu et al. (2022), has become a dominant PEFT strategy.
claim: The paper 'ADePT: adaptive decomposed prompt tuning for parameter-efficient fine-tuning' introduces Adaptive Decomposed Prompt Tuning (ADePT), a PEFT method.
reference: The paper 'LoRA+: efficient low rank adaptation of large models' (Proceedings of the 41st International Conference on Machine Learning) is cited in the survey 'A Survey on the Theory and Mechanism of Large Language Models' regarding parameter-efficient fine-tuning.
claim: Zhao et al. (2024b) propose a memory-efficient PEFT training strategy that performs gradient updates within a projected low-rank subspace.
claim: PEFT methods, as researched by He et al. (2021) and Raffel et al. (2020b), adapt models by optimizing only a small subset of parameters to reduce computational burden.
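The LoRA claim above can be illustrated with a minimal numpy sketch (hypothetical layer sizes, rank, and scaling; not the Hu et al. (2022) implementation): the frozen weight W is augmented with a trainable low-rank product B·A, so only a small fraction of parameters is ever updated.

```python
import numpy as np

# Minimal LoRA sketch (assumed shapes): instead of updating a frozen weight
# matrix W (d_out x d_in), LoRA learns a low-rank update B @ A with rank
# r << min(d_out, d_in), so only r * (d_in + d_out) parameters train.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (init 0)
alpha = 8.0                                # scaling factor, applied as alpha / r

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; since B = 0 at init,
    # the adapted layer exactly matches the frozen model.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((2, d_in))
full = W.size                   # 4096 frozen parameters
trainable = A.size + B.size     # 512 trainable parameters
print(trainable / full)         # fraction of parameters updated: 0.125
```

Because B starts at zero, fine-tuning begins from the pretrained model's exact behavior, which is one reason this parameterization trains stably.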
The construction and refined extraction techniques of knowledge ... — Nature (nature.com), Feb 10, 2026 — 3 facts
reference: BitFit is a parameter-efficient fine-tuning method for transformer-based masked language models, published in the ACL 2022 proceedings.
claim: Parameter-Efficient Fine-Tuning (PEFT) addresses computational challenges by updating only a small fraction of a model's parameters.
reference: LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique for large language models, published in the ICLR 2022 proceedings.
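The BitFit reference above can be sketched in a few lines (a toy linear-plus-ReLU layer with assumed shapes and learning rate, not the published implementation): all weights stay frozen and only the bias vector receives gradient updates.

```python
import numpy as np

# BitFit-style sketch: in a toy linear + ReLU layer, freeze the weight
# matrix and fine-tune only the bias vector.
rng = np.random.default_rng(1)
d = 16
W = rng.standard_normal((d, d))  # frozen pretrained weight (never updated)
b = np.zeros(d)                  # trainable bias, the only tuned parameter

def layer(x):
    return np.maximum(x @ W.T + b, 0.0)

def bitfit_step(x, grad_out, lr=0.1):
    # Backpropagate through the ReLU into the bias only; W is untouched.
    pre = x @ W.T + b
    mask = (pre > 0).astype(float)
    return b - lr * (grad_out * mask).sum(axis=0)

x = rng.standard_normal((4, d))
b = bitfit_step(x, np.ones((4, d)))
print(b.size, "of", W.size + b.size, "parameters are trainable")
```

Here only 16 of 272 parameters are trainable, which matches the claim that PEFT methods update a small fraction of a model's parameters.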
Large Language Models Meet Knowledge Graphs for Question ... — arXiv (arxiv.org), Sep 22, 2025 — 3 facts
reference: Tian et al. (2024) introduced KG-Adapter, a method that enables knowledge graph integration in large language models through parameter-efficient fine-tuning.
reference: KG-Adapter (Tian et al., 2024) improves parameter-efficient fine-tuning of large language models by introducing a knowledge adaptation layer.
reference: Luo et al. (2024b) published 'KnowLA: Enhancing parameter-efficient finetuning with knowledgeable adaptation' in NAACL, pages 7146–7159, introducing a knowledge-based method for parameter-efficient fine-tuning.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph — Stardog (stardog.com), Dec 4, 2024 — 2 facts
claim: Stardog is developing automated Parameter-Efficient Fine-Tuning (PEFT) on customer data, including data accessed via Stardog's federated Virtual Graph (VG) capability, using customer ontologies as inputs.
claim: Using domain-specific ontologies as PEFT input for large language models improves accuracy and reduces the frequency of hallucinations.
The Synergy of Symbolic and Connectionist AI in LLM ... — arXiv (arxiv.org) — 1 fact
reference: Parameter-efficient fine-tuning methods for large-scale pre-trained language models were reviewed by Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. in the 2023 Nature Machine Intelligence article 'Parameter-efficient fine-tuning of large-scale pre-trained language models'.