Claim
Dai et al. (2022) assert that Transformers implicitly perform fine-tuning during in-context learning inference, building on the dual form of the attention mechanism, originally described by Aizerman et al. (1964) and revisited for neural networks by Irie et al. (2022).
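The dual form referenced in the claim can be illustrated for linear (softmax-free) attention: the primal view multiplies the query by a weight matrix accumulated from key/value outer products, while the dual view evaluates a kernel-machine sum over the stored key/value pairs. A minimal numpy sketch (all variable names and shapes here are illustrative, not from the cited papers):

```python
import numpy as np

# Toy illustration of the primal/dual equivalence for linear attention.
rng = np.random.default_rng(0)
d = 4
K = rng.normal(size=(5, d))  # keys of 5 in-context tokens
V = rng.normal(size=(5, d))  # values of 5 in-context tokens
q = rng.normal(size=d)       # query vector

# Primal form: W = sum_i v_i k_i^T, then output = W q.
# This accumulated W is what Dai et al. interpret as an implicit update.
W = V.T @ K                  # (d, d)
primal = W @ q

# Dual form: output = sum_i v_i (k_i . q), a kernel-machine evaluation
dual = (V * (K @ q)[:, None]).sum(axis=0)

print(np.allclose(primal, dual))  # True: the two forms coincide
```

Because the accumulated matrix W is a sum of outer products, attention over in-context demonstrations can be read as adding a gradient-descent-like update to the model's weights, which is the basis of the implicit fine-tuning interpretation.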

Authors

Sources

Referenced by nodes (2)