Claim
Wen et al. (2024) proved that inserting a single Transformer (self-attention) layer into an RNN is sufficient to give it in-context retrieval capability and close the representation gap with Transformers: an RNN's fixed-size hidden state provably cannot perform exact retrieval over arbitrarily long contexts, while even one attention layer over the full context can.
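
To make the architecture concrete, below is a minimal PyTorch sketch of such a hybrid: a GRU stack with one causal self-attention layer inserted between recurrent layers. The class name (`HybridRNN`), the choice of GRU, the dimensions, and the placement of the attention layer are all illustrative assumptions, not the construction analyzed by Wen et al. (2024).

```python
import torch
import torch.nn as nn

class HybridRNN(nn.Module):
    """Hypothetical sketch of the hybrid the claim describes: an RNN
    stack augmented with a single Transformer (self-attention) layer.
    Not the exact construction from Wen et al. (2024)."""

    def __init__(self, vocab_size: int, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Recurrent layers: only a fixed-size state flows across time,
        # so on their own they cannot do exact long-range retrieval.
        self.rnn1 = nn.GRU(d_model, d_model, batch_first=True)
        # The single attention layer: every position can address any
        # earlier position directly, supplying in-context retrieval.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.rnn2 = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)                   # (B, T, d_model)
        x, _ = self.rnn1(x)
        # Boolean causal mask: True entries are positions attention
        # may NOT look at (i.e., the future).
        T = x.size(1)
        mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1
        )
        a, _ = self.attn(x, x, x, attn_mask=mask)
        x, _ = self.rnn2(x + a)                  # residual around attention
        return self.out(x)                       # next-token logits

# Usage: logits for a toy batch of 2 sequences of length 16.
model = HybridRNN(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 16)))   # shape (2, 16, 100)
```

The design point the claim rests on is visible in the sketch: the GRU layers carry only a constant-size state forward, while the lone attention layer sees the entire prefix at once, which is the mechanism that restores retrieval.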

Authors

Kaiyue Wen, Xingyu Dang, Kaifeng Lyu

Sources

Wen, K., Dang, X., & Lyu, K. (2024). RNNs are not Transformers (Yet): The Key Bottleneck on In-context Retrieval. arXiv:2402.18510.
Referenced by nodes (1)