reference
Yang et al. (2023b) incorporated a looping paradigm directly into the Transformer's computation, repeatedly applying the same weight-tied block so that the model can more effectively learn tasks that require emulating iterative learning algorithms.
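A minimal sketch of the looping idea, under the assumption that it works by applying one weight-tied Transformer block for T iterations while re-injecting the input embeddings at every step (all class and function names here are hypothetical, not from Yang et al.):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

class LoopedTransformerBlock:
    """A single weight-tied block: self-attention + MLP, each with a residual."""
    def __init__(self, d, rng):
        s = 1.0 / np.sqrt(d)
        self.Wq = rng.normal(0, s, (d, d))
        self.Wk = rng.normal(0, s, (d, d))
        self.Wv = rng.normal(0, s, (d, d))
        self.W1 = rng.normal(0, s, (d, 4 * d))
        self.W2 = rng.normal(0, s, (4 * d, d))

    def __call__(self, h):
        # single-head self-attention with residual connection
        x = layer_norm(h)
        q, k, v = x @ self.Wq, x @ self.Wk, x @ self.Wv
        att = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        h = h + att @ v
        # position-wise ReLU MLP with residual connection
        x = layer_norm(h)
        return h + np.maximum(x @ self.W1, 0.0) @ self.W2

def looped_transformer(block, emb, T):
    """Loop the SAME block T times instead of stacking T distinct layers."""
    h = np.zeros_like(emb)
    for _ in range(T):
        h = block(h + emb)  # input injection: the prompt stays visible each loop
    return h

rng = np.random.default_rng(0)
d, n, T = 16, 8, 12          # model width, sequence length, loop count
block = LoopedTransformerBlock(d, rng)
emb = rng.normal(size=(n, d))
out = looped_transformer(block, emb, T)
print(out.shape)             # (8, 16)
```

Because the block's parameters are shared across all T iterations, increasing T deepens the effective computation without adding parameters, which is what lets the looped model mimic the repeated update steps of an iterative learning algorithm.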
Sources
- A Survey on the Theory and Mechanism of Large Language Models, arxiv.org
Referenced by nodes (1)
- Transformer concept