Claim
Kim and Suzuki (2024) proved that, for Transformers with both MLP and attention layers, and under the assumption that the attention layers converge rapidly, the infinite-dimensional loss landscape over the MLP parameters has a benign non-convex structure: the objective is non-convex, but its geometry does not obstruct gradient-based optimization of the MLP parameters.
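To make the claim concrete, here is a minimal mean-field-style formalization. It is a sketch under assumed notation, not the paper's exact statement: the measure \(\mu\) over MLP neuron parameters, the risk functional \(F\), the per-neuron feature map \(\sigma\), and the converged attention map \(\mathrm{attn}^{\star}\) are illustrative symbols introduced here. Treating the MLP layer in the infinite-width (mean-field) limit as a distribution over neurons, and fixing the attention layers at their rapidly reached converged state, the loss becomes a functional of \(\mu\):

\[
F(\mu) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\ell\big(f_\mu(x),\, y\big)\right],
\qquad
f_\mu(x) \;=\; \int \sigma\big(\theta;\ \mathrm{attn}^{\star}(x)\big)\, \mathrm{d}\mu(\theta).
\]

A benign non-convex structure then says, roughly, that stationarity already implies global optimality:

\[
\mu^{\star}\ \text{stationary for the (Wasserstein) gradient flow of } F
\;\Longrightarrow\;
F(\mu^{\star}) \;=\; \inf_{\mu} F(\mu),
\]

so, on this reading, mean-field training dynamics for the MLP parameters cannot be trapped at a spurious critical point even though \(F\) is non-convex.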
