perspective
The 'Representation Camp' perspective posits that Large Language Models (LLMs) store topic-level memories during pretraining, and that in-context learning retrieves the contextually relevant memories at inference time based on the demonstrations provided.
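This retrieval view can be illustrated with a toy sketch: pretrained topic "memories" are stored as vectors, and the demonstrations form a query that selects the most similar stored topic. The topic names, vectors, and embedding values below are illustrative assumptions, not details from the cited survey.

```python
import math

# Toy model of the 'Representation Camp' view: pretraining leaves behind
# topic "memories" (here, hand-written vectors); in-context demonstrations
# produce a query embedding that retrieves the closest stored topic.
# All names and numbers are hypothetical, for illustration only.
TOPIC_MEMORIES = {
    "sentiment": [0.9, 0.1, 0.0],
    "translation": [0.1, 0.9, 0.1],
    "arithmetic": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_topic(demo_embedding):
    """Return the stored topic most similar to the demonstration embedding."""
    return max(TOPIC_MEMORIES, key=lambda t: cosine(TOPIC_MEMORIES[t], demo_embedding))

# A demonstration set whose (assumed) embedding leans toward sentiment analysis.
print(retrieve_topic([0.8, 0.2, 0.1]))  # → sentiment
```

In this sketch the demonstrations never teach a new task; they only steer retrieval toward capabilities already laid down during pretraining, which is the core of the perspective stated above.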
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (3)
- Large Language Models concept
- In-Context Learning concept
- Pre-training concept