reference
Hu et al. (2024) characterize the universality, capacity, and efficiency limits of prompt tuning within simplified Transformer settings.
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Transformer concept