Claim
Diep et al. (2025) establish a theoretical link between the “zero-initialized attention” mechanism and Mixture-of-Experts (MoE), proving that this initialization strategy improves sample efficiency compared to random initialization, with non-linear prompts outperforming linear ones.
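A minimal sketch of the mechanism behind this claim, not Diep et al.'s code: zero-initialized attention in the LLaMA-Adapter style prepends learnable prompt tokens to a frozen attention layer and scales their contribution by a learnable gating factor initialized to zero, so the layer starts out identical to the pretrained model. The gate mixing the "pretrained" branch with the "prompt" branch is what invites the two-expert MoE reading. All names here (`PromptedAttention`, `prompt_len`, the tanh-squashed gate) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PromptedAttention(nn.Module):
    """Single-head attention with a zero-gated learnable prompt (sketch)."""

    def __init__(self, dim: int, prompt_len: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        # Learnable prompt tokens prepended on the key/value side.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Zero-initialized gate: at step 0 the prompt branch contributes
        # nothing, so the layer matches the frozen pretrained attention.
        self.gate = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Branch 1: attention over the original tokens ("pretrained expert").
        attn_x = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out_x = attn_x @ v

        # Branch 2: attention over the learnable prompt ("prompt expert").
        pk = self.k_proj(self.prompt)   # (prompt_len, dim)
        pv = self.v_proj(self.prompt)
        attn_p = torch.softmax(q @ pk.transpose(-2, -1) * self.scale, dim=-1)
        out_p = attn_p @ pv

        # The gate mixes the two branches; this weighted combination of a
        # frozen expert and a learned expert is the MoE-style view.
        return out_x + torch.tanh(self.gate) * out_p


x = torch.randn(2, 8, 32)
layer = PromptedAttention(dim=32, prompt_len=4)
print(layer(x).shape)  # torch.Size([2, 8, 32])
```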
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Mixture of Experts (MoE) concept