Claim
Wen et al. (2024) showed theoretically that adding a single Transformer (self-attention) layer to an RNN is sufficient to enhance its in-context retrieval capability and close its representation gap with Transformers (a minimal sketch follows the node fields below).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Recurrent Neural Network concept
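
To make the claim concrete, here is a minimal PyTorch sketch of the kind of hybrid architecture it describes: a recurrent backbone with one self-attention layer layered on top. This is an illustrative assumption, not Wen et al.'s exact construction; the `HybridRNN` name, the GRU backbone, and all sizes are hypothetical.

```python
# Minimal sketch (assumption, not Wen et al.'s exact construction): an RNN
# augmented with a single self-attention layer. The recurrent state compresses
# history into a fixed-size vector; the one attention layer lets each position
# query the full sequence directly, which is the in-context retrieval step.
import torch
import torch.nn as nn


class HybridRNN(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Recurrent backbone (hypothetical choice of GRU).
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        # The single Transformer-style attention layer added to the RNN.
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                     # (batch, seq, d_model)
        h, _ = self.rnn(x)                         # recurrent features
        # Causal mask: True entries are blocked, so each position attends
        # only to itself and earlier positions.
        seq_len = tokens.size(1)
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=tokens.device),
            diagonal=1,
        )
        a, _ = self.attn(h, h, h, attn_mask=mask)  # direct retrieval over history
        return self.out(h + a)                     # residual combination


# Usage: next-token logits for a toy batch of 2 sequences of length 16.
model = HybridRNN(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 16)))    # shape (2, 16, 1000)
```

The residual combination of recurrent and attention features is one plausible way to wire the hybrid; the theoretical claim concerns what the added attention layer makes representable, not this particular wiring.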