Claim
Dohmatob et al. (2024) develop a theoretical framework showing that including synthetic, AI-generated data in the training corpus can alter or break traditional scaling laws, potentially degrading performance and leading to model collapse.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- Synthetic data concept