Claim
Yao et al. (2025c) provide a unified framework for selecting appropriate weight types and learning rates, offering theoretical guidance for general fine-tuning of attention-based models.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- fine-tuning concept