Reference
The paper "DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining" was published in Advances in Neural Information Processing Systems 36, pages 69798–69818.