Claim
Jelassi et al. (2024) demonstrated that Transformers can copy input sequences whose length is exponential in the size of the model, whereas fixed-state models such as generalized state space models are fundamentally limited by their fixed-size latent state and cannot accurately copy sequences longer than that state can encode.
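To make the claim concrete, below is a minimal, self-contained Python sketch. It is not the paper's construction or code: the function names, the token-level "state" abstraction, and the demo parameters are illustrative assumptions. It contrasts the hash-based n-gram lookup mechanism Jelassi et al. use to explain Transformer copying (attention can re-read the entire input) with a caricature of a fixed-state model whose memory retains at most `state_size` tokens.

```python
import random


def ngram_lookup_copy(seq, n=2):
    """Copy `seq` via hash-based n-gram lookup, in the spirit of the
    mechanism Jelassi et al. analyze: store each n-gram -> next-token
    pair, then regenerate the sequence by repeatedly looking up the last
    n emitted tokens. (Assumes the n-grams in `seq` are unique; the
    paper's construction handles collisions by taking n large enough.)"""
    table = {}
    for i in range(len(seq) - n):
        table.setdefault(tuple(seq[i:i + n]), seq[i + n])
    out = list(seq[:n])  # seed with the first n tokens
    while len(out) < len(seq):
        out.append(table[tuple(out[-n:])])
    return out


def fixed_state_copy(seq, state_size):
    """Caricature of a fixed-state model: the recurrent state retains at
    most `state_size` tokens of information, so earlier tokens are lost.
    Counting argument: a d-bit state distinguishes at most 2**d prefixes,
    while there are V**L length-L inputs over a vocabulary of size V, so
    exact copying must fail once L * log2(V) > d."""
    state = list(seq[-state_size:])        # all that survives compression
    lost = max(0, len(seq) - state_size)   # tokens the state cannot hold
    return [None] * lost + state


if __name__ == "__main__":
    random.seed(0)
    seq = random.sample(range(10_000), 64)     # distinct tokens -> unique bigrams
    assert ngram_lookup_copy(seq, n=2) == seq  # lookup mechanism: exact copy
    for k in (16, 64):
        out = fixed_state_copy(seq, state_size=k)
        hits = sum(a == b for a, b in zip(out, seq))
        print(f"state_size={k}: {hits}/{len(seq)} tokens recovered")
```

The counting argument in the second docstring is the core of the lower bound: a fixed-size state cannot encode arbitrarily long inputs exactly, while the lookup mechanism scales with however much input attention can reach.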
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Transformers concept