claim
Zhong et al. (2025a) introduced the Reinforced Token Optimization (RTO) framework and proved that modeling Reinforcement Learning from Human Feedback (RLHF) as a token-wise Markov Decision Process (MDP) yields significantly better sample efficiency than the traditional sentence-level contextual bandit formulation.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)