Claim
Kim and Suzuki (2024) theoretically showed that for Transformers with both MLP and attention layers, assuming rapid convergence of the attention layers, the infinite-dimensional loss landscape for the MLP parameters exhibits a benign non-convex structure.
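Results of this kind are typically stated in the mean-field (infinite-width) parameterization of the MLP layer, where the landscape is studied over probability measures rather than finite weight vectors. The sketch below is a minimal illustration of that viewpoint under standard assumptions; the symbols \(f\), \(\mu\), \(w\), \(\sigma\), \(\ell\), and \(\mathcal{L}\) are illustrative and are not taken from Kim and Suzuki (2024) or the cited survey.

```latex
% Mean-field view of one MLP layer: lift the neuron weights to a
% probability measure \mu, so the network output is linear in \mu.
\[
  f(x;\mu) \;=\; \int \sigma\!\big(w^{\top}x\big)\,\mathrm{d}\mu(w),
  \qquad
  \mathcal{L}(\mu) \;=\; \mathbb{E}_{(x,y)}\!\left[\ell\big(f(x;\mu),\,y\big)\right].
\]
% Because f(x;\mu) is linear in the measure \mu, the population loss
% \mathcal{L} is convex on the space of measures whenever \ell(\cdot, y)
% is convex, even though any finite-width landscape is non-convex.
% "Benign non-convexity" refers to this structure: suitable first-order
% stationary points in the infinite-dimensional (Wasserstein) geometry
% are global minimizers.
```

Read against the claim above, the rapid-convergence assumption treats the attention layers as effectively fixed, so the landscape being analyzed is the one over the MLP measure alone; the exact formulation in the cited work may differ.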
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Transformers concept