Claim
Large language models do not learn new tasks during in-context learning (ICL); instead, they use information in the demonstrations to locate a task or topic, while the ability to perform that task is acquired during pretraining.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (3)
- Large Language Models concept
- In-Context Learning concept
- Pre-training concept