Reference
The paper 'Transformers implement functional gradient descent to learn non-linear functions in context' is an arXiv preprint (arXiv:2312.06528).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (3)
- Transformers concept
- In-Context Learning concept
- ArXiv concept