reference
The paper 'Are Transformers with One Layer Self-Attention Using Low-Rank Weight Matrices Universal Approximators?' is an arXiv preprint (arXiv:2307.14023) cited in Section 3.2.1 of 'A Survey on the Theory and Mechanism of Large Language Models'.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (2)
- Transformers (concept)
- self-attention mechanism (concept)