Claim
Researchers (2025a) analyzed the optimization dynamics of a single-layer Transformer with normalized ReLU self-attention in the in-context learning (ICL) setting, finding that the components of the attention weight matrices associated with smaller eigenvalues preserve basic knowledge, while those associated with larger eigenvalues capture specialized knowledge.
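To make the architecture in the claim concrete, below is a minimal sketch of what "normalized ReLU self-attention" commonly denotes: standard single-head self-attention with the softmax replaced by a ReLU followed by a row-wise normalization. This is an assumption about the formulation, not the cited paper's exact definition; the function name, the choice of L1 (row-sum) normalization, and the epsilon guard are all illustrative.

```python
import numpy as np

def normalized_relu_attention(X, W_Q, W_K, W_V):
    """Single-head self-attention with softmax replaced by a
    normalized ReLU. One common formulation (an assumption here;
    the cited paper's exact normalization may differ)."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = K.shape[-1]
    # ReLU in place of the exponential in softmax.
    scores = np.maximum(Q @ K.T / np.sqrt(d_k), 0.0)
    # Row-wise L1 normalization so each token's attention weights
    # sum to 1; eps guards rows the ReLU zeroed out entirely.
    weights = scores / (scores.sum(axis=-1, keepdims=True) + 1e-9)
    return weights @ V

# Toy usage: n tokens of dimension d, random projections.
rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))
W_Q, W_K, W_V = (rng.standard_normal((d, d)) for _ in range(3))
out = normalized_relu_attention(X, W_Q, W_K, W_V)
print(out.shape)  # (5, 8)
```

Under this formulation, the eigenvalue decomposition referenced in the claim would apply to the learned weight matrices (e.g., W_Q, W_K), whose spectra the paper relates to basic versus specialized knowledge.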
