Claim
Low-Rank Adaptation (LoRA), introduced by Hu et al. (2022), has become a dominant Parameter-Efficient Fine-Tuning (PEFT) strategy.
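As an illustration of the low-rank update LoRA applies, here is a minimal sketch (not drawn from the cited survey; the dimensions, rank, and scaling factor are illustrative assumptions): the frozen pretrained weight `W` is augmented with a trainable product `B @ A` of rank `r`, and `B` is initialized to zero so fine-tuning starts from the base model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2  # toy dimensions; in practice r << d_in, d_out

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor, small random init
B = np.zeros((d_out, r))                # trainable low-rank factor, zero init
alpha = 4.0                             # illustrative scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x ; only A and B receive gradients during fine-tuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted layer initially reproduces the frozen base layer.
assert np.allclose(lora_forward(x), W @ x)
```

The parameter saving comes from training only `A` and `B`: `r * (d_in + d_out)` values instead of the full `d_out * d_in` matrix.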
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Parameter-Efficient Fine-Tuning concept