claim
Li et al. (2026) formalized Reinforcement Learning from Human Feedback (RLHF) through the lens of algorithmic stability and developed a generalization theory under a linear reward model (see the sketch after the sources list).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
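
A minimal sketch of the objects the claim refers to, assuming the common setup: a reward linear in a fixed feature map, pairwise preferences modeled by Bradley-Terry, and uniform stability in the sense of Bousquet and Elisseeff (2002). The symbols $r_\theta$, $\phi$, $\beta$, $L$, and $\hat{L}_S$ are illustrative notation, not taken from Li et al.'s paper.

```latex
% Linear reward model: the learned reward is linear in a fixed feature map \phi.
% Under a Bradley-Terry preference model, the preferred response y^+ is chosen
% over y^- with probability given by the logistic function \sigma applied to
% the reward difference.
\[
  r_\theta(x, y) = \theta^\top \phi(x, y), \qquad
  \Pr\left(y^+ \succ y^- \mid x\right)
    = \sigma\!\left(\theta^\top \bigl(\phi(x, y^+) - \phi(x, y^-)\bigr)\right)
\]
% Uniform stability: an algorithm A is \beta-uniformly stable if replacing one
% preference pair in the training set S changes its loss on any example by at
% most \beta. Stability then bounds the expected generalization gap between
% population risk L and empirical risk \hat{L}_S (Bousquet and Elisseeff, 2002):
\[
  \Bigl|\, \mathbb{E}_S\bigl[ L(A(S)) - \hat{L}_S(A(S)) \bigr] \Bigr| \le \beta
\]
```

Under this reading, "generalization theory under the linear reward model" means bounding how well a reward model fit on finitely many preference pairs transfers to the preference distribution, with the stability parameter $\beta$ doing the work that uniform convergence arguments would otherwise do.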