claim
Zheng et al. (2024) demonstrated that autoregressively trained Transformers can implement in-context learning by acting as a meta-optimizer: under specific conditions on the initial data distribution, they learn to perform one-step gradient descent to solve ordinary least squares (OLS) regression problems.
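As a hedged illustration of the mechanism the claim describes (not Zheng et al.'s actual Transformer construction), the sketch below shows the one-step gradient descent update on the OLS objective that a trained model is said to emulate in-context; the data dimensions, zero initialization, and step size are all assumptions for the example.

```python
import numpy as np

# Synthetic in-context regression task (dimensions are illustrative).
rng = np.random.default_rng(0)
n, d = 32, 4
X = rng.normal(size=(n, d))       # context inputs
w_true = rng.normal(size=d)
y = X @ w_true                    # context targets

# One step of gradient descent on L(w) = ||Xw - y||^2 / (2n),
# starting from a zero initialization (an assumed choice).
w0 = np.zeros(d)
eta = 0.1                         # hypothetical step size
grad = X.T @ (X @ w0 - y) / n     # gradient of the OLS loss at w0
w1 = w0 - eta * grad              # one-step GD estimate of w

# The one-step estimate already reduces the OLS loss relative to w0.
loss0 = np.linalg.norm(X @ w0 - y)
loss1 = np.linalg.norm(X @ w1 - y)

# Prediction on a query input, as an in-context learner would produce.
x_query = rng.normal(size=d)
pred = x_query @ w1
```

With a zero initialization the update reduces to `w1 = eta * X.T @ y / n`, so the one-step estimate is a scaled correlation between inputs and targets rather than the exact OLS solution.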
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (3)
- Transformers concept
- In-Context Learning concept
- gradient descent concept