Claim
Diep et al. (2025) establish a theoretical link between the “zero-initialized attention” mechanism and Mixture-of-Experts (MoE), proving that this initialization strategy improves sample efficiency compared to random initialization, with non-linear prompts outperforming linear ones.
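To make the claim concrete, below is a minimal sketch of a zero-initialized attention module of the kind the claim refers to (as used in adapter-style fine-tuning): learnable prompt tokens attend alongside the input, but their contribution is scaled by a gate initialized to zero, so the adapted layer starts out identical to the frozen pretrained layer. This is an illustrative single-head simplification; the class name, the separate softmax over prompt keys, and the hyperparameters are assumptions, not the exact formulation analyzed by Diep et al. (2025).

```python
import torch
import torch.nn as nn


class ZeroInitPromptAttention(nn.Module):
    """Single-head attention with learnable prompt tokens whose contribution
    is scaled by a zero-initialized gate (illustrative sketch, not the
    paper's exact formulation)."""

    def __init__(self, dim: int, num_prompts: int = 10):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        # Learnable (non-linear, via the projections) prompt tokens.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        # The gate starts at exactly zero: the prompt branch contributes
        # nothing at initialization, which is the "zero-initialized" part.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.q_proj(x)                          # queries from the input
        k_x, v_x = self.k_proj(x), self.v_proj(x)   # keys/values from the input
        p = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        k_p, v_p = self.k_proj(p), self.v_proj(p)   # keys/values from the prompts

        scale = x.size(-1) ** -0.5
        attn_x = torch.softmax(q @ k_x.transpose(-2, -1) * scale, dim=-1)
        attn_p = torch.softmax(q @ k_p.transpose(-2, -1) * scale, dim=-1)

        # Frozen-path attention plus the gated prompt branch; at init the
        # gate is zero, so the output equals the unadapted attention output.
        return attn_x @ v_x + torch.tanh(self.gate) * (attn_p @ v_p)
```

Read as a mixture: the attention weights over prompt tokens act like gating weights over a set of "experts" (the prompt value vectors), which is the kind of correspondence the claim says Diep et al. (2025) formalize when comparing zero versus random initialization.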

Authors

Sources
