Claim
The shift from supervised objectives to preference-based optimization in Large Language Models raises theoretical questions about reward-model generalization, policy stability, and the alignment of complex systems.
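To make the claim concrete, a minimal sketch of one preference-based objective (a DPO-style pairwise loss) is shown below. This is an illustrative example, not taken from the cited survey; the function name, toy log-probabilities, and the choice of beta are assumptions.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style loss for one preference pair (illustrative sketch).

    Instead of a supervised target, the policy is trained to widen the
    log-probability margin of the preferred response over the rejected one,
    relative to a frozen reference model. Inputs are summed token
    log-probabilities of whole responses.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # -log(sigmoid(margin)), written as softplus(-margin) for clarity
    return math.log(1.0 + math.exp(-margin))

# Toy numbers: the policy already assigns the chosen response a higher
# log-probability than the reference does, so the loss is small.
print(round(dpo_loss(-10.0, -14.0, -12.0, -12.0), 4))
```

Flipping the chosen and rejected log-probabilities increases the loss, which is the sense in which the objective is "preference-based" rather than supervised.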
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept