claim
Min et al. (2022a) found that, across 12 models including GPT-3, replacing the gold labels in in-context demonstrations (input-label pairs) with random labels at inference time causes only marginal drops in performance, a result that contrasts with the account of in-context learning given by Xie et al. (2021).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (2)
- In-Context Learning concept
- GPT-3 concept