Claim
OpenAI introduced the "weak-to-strong generalization" (W2SG) paradigm (Burns et al., 2024), demonstrating that strong pre-trained language models fine-tuned on supervision signals from weaker models consistently outperform their weak supervisors.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (2)
- OpenAI entity
- pre-trained language models concept