Reference
The paper 'Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers' is an arXiv preprint (arXiv:2212.10559).

Authors

Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, Furu Wei