Claim
Researchers (2025a) analyzed the optimization dynamics of a single-layer Transformer with normalized ReLU self-attention in the in-context learning (ICL) setting, finding that the smaller eigenvalues of the attention weight matrix preserve basic knowledge, while the larger eigenvalues capture specialized knowledge.
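To make the spectral reading of this claim concrete, the following is a minimal NumPy sketch, not the cited authors' actual construction: it builds a hypothetical symmetric attention weight matrix (a stand-in for a trained combined key-query matrix such as W_K^T W_Q), splits its eigenspectrum into small- and large-magnitude eigenvalue subspaces, and compares the normalized ReLU attention outputs each spectral component produces. The matrix `W`, the dimension, the token count, the symmetrization, and the half/half spectral split are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_relu_attention(Q, K, V):
    """Single-head attention where softmax is replaced by ReLU scores
    normalized to sum to one along each row."""
    scores = np.maximum(Q @ K.T / np.sqrt(Q.shape[-1]), 0.0)
    scores = scores / (scores.sum(axis=-1, keepdims=True) + 1e-9)
    return scores @ V

d = 16                          # hypothetical embedding dimension
A = rng.normal(size=(d, d))
W = (A + A.T) / 2               # symmetrized stand-in for a trained W_K^T W_Q

eigvals, eigvecs = np.linalg.eigh(W)
order = np.argsort(np.abs(eigvals))              # sort eigenvalues by magnitude
small, large = order[: d // 2], order[d // 2:]

# Projectors onto the small- and large-eigenvalue subspaces.
P_small = eigvecs[:, small] @ eigvecs[:, small].T
P_large = eigvecs[:, large] @ eigvecs[:, large].T

W_basic = P_small @ W @ P_small      # "basic knowledge" component, per the claim
W_special = P_large @ W @ P_large    # "specialized knowledge" component, per the claim

X = rng.normal(size=(8, d))          # 8 hypothetical in-context token embeddings
out_full = normalized_relu_attention(X @ W, X, X)
out_basic = normalized_relu_attention(X @ W_basic, X, X)
out_special = normalized_relu_attention(X @ W_special, X, X)

print("||full - basic||   =", np.linalg.norm(out_full - out_basic))
print("||full - special|| =", np.linalg.norm(out_full - out_special))
```

Under the claim, the large-eigenvalue component should dominate task-specific attention behavior, so the residual against the full output would be smaller for `out_special` than for `out_basic`; with a random `W` as here, the split is purely illustrative.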
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (2)
- In-Context Learning concept
- Transformer concept