claim
Large language models trained on a curated mixture of data from multiple sources, such as web text, books, code, and scientific articles, consistently outperform models trained on a single monolithic corpus (Liu et al., 2025g).
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- Large Language Models concept