Claim
Wei et al. (2021) posit that pre-training enables models to capture the latent variable structure underlying text data.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- Pre-training concept