claim
Zhao et al. (2024b) propose a memory-efficient training strategy for Parameter-Efficient Fine-Tuning (PEFT) that performs gradient updates within a projected low-rank subspace.
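The claimed strategy can be illustrated with a minimal sketch: the full gradient is projected into a low-rank subspace, the update is computed there, and the result is projected back into the weight matrix. All names, shapes, and the use of SVD to pick the projection basis are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical sketch of a gradient update inside a projected
# low-rank subspace. Not the authors' implementation; SGD stands
# in for whatever optimizer the method actually uses.

rng = np.random.default_rng(0)

def low_rank_update(W, grad, rank, lr):
    # Build a rank-r projection basis from the gradient's SVD.
    U, S, Vt = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]          # projection basis, shape (m, r)
    g_low = P.T @ grad       # compressed gradient, shape (r, n)
    # A real method would keep optimizer state (e.g. Adam moments)
    # at this reduced (r, n) size, which is the memory saving.
    W -= lr * (P @ g_low)    # project back and apply the step
    return W

W = rng.standard_normal((64, 32))
grad = rng.standard_normal((64, 32))
W_new = low_rank_update(W.copy(), grad, rank=4, lr=0.1)
```

The memory saving comes from keeping optimizer state at the compressed `(r, n)` size rather than the full `(m, n)` size; the sketch above omits that state and only shows the projection round-trip.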
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Parameter-Efficient Fine-Tuning concept