Reference
The paper 'Can looped transformers learn to implement multi-step gradient descent for in-context learning?' is an arXiv preprint (arXiv:2410.08292).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (2)
- In-Context Learning concept
- gradient descent concept