Claim
Xiong et al. (2024) addressed the lack of exploration in offline Direct Preference Optimization (DPO) by formulating RLHF as a reverse-KL regularized contextual bandit and proposing online iterative algorithms that outperform static offline baselines.
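For context, a minimal sketch of the reverse-KL regularized bandit objective behind this formulation, written in standard DPO notation (prompt $x$, response $y$, reward $r$, reference policy $\pi_{\mathrm{ref}}$, regularization strength $\beta$; these symbols are assumptions, not taken from the source):

$$
\max_{\pi}\;\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot \mid x)}\big[ r(x, y) \big] \;-\; \beta\, \mathbb{E}_{x \sim \mathcal{D}}\Big[ \mathrm{KL}\big( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big) \Big],
$$

whose maximizer has the closed form $\pi^{*}(y \mid x) \propto \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\big( r(x, y) / \beta \big)$. Roughly, the iterative algorithms alternate between sampling responses from the current policy, collecting preference feedback on them, and re-solving this objective, whereas a static offline method fits it once on a fixed dataset.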
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via Serper)