Claim
Yu et al. (2023a) demonstrate that Transformer-like deep network layers can be derived as unrolled steps of an optimization process that maximizes a sparse rate reduction objective.
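For context, a sketch of the sparse rate reduction objective from that work, with notation following the paper and the normalization constants inside the rate terms omitted:

\[
\max_{f:\; Z = f(X)} \; \mathbb{E}\!\left[\, \Delta R\big(Z;\, U_{[K]}\big) \;-\; \lambda\,\lVert Z \rVert_0 \,\right],
\qquad
\Delta R\big(Z;\, U_{[K]}\big) \;=\; R(Z) \;-\; R^{c}\big(Z;\, U_{[K]}\big)
\]

Here Z = f(X) are the token representations, R(Z) is their lossy coding rate, R^c(Z; U_[K]) is the coding rate measured against K learned subspaces with bases U_1, ..., U_K, and λ weights the sparsity penalty. Roughly, the paper derives an attention-like operator from a compression (gradient) step on R^c and an MLP-like (ISTA) operator from the sparsification term, which is how the layers connect to unrolled optimization.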
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Transformer concept