Claim
Xiong et al. (2024) addressed the lack of online exploration in offline Direct Preference Optimization (DPO) by formulating the alignment problem as a reverse-KL-regularized contextual bandit and proposing iterative (online) algorithms that outperform static offline baselines.
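As a sketch of the formulation (standard reverse-KL-regularized objective; the exact notation here is an assumption, not copied from the paper), the policy $\pi$ is optimized against a reward $r(x, a)$ subject to a KL penalty toward a reference policy $\pi_0$:

$$
J(\pi) \;=\; \mathbb{E}_{x \sim d_0}\!\left[\, \mathbb{E}_{a \sim \pi(\cdot \mid x)}\big[r(x, a)\big] \;-\; \eta \,\mathrm{KL}\big(\pi(\cdot \mid x)\,\|\,\pi_0(\cdot \mid x)\big) \right],
$$

whose maximizer has the familiar closed form

$$
\pi^\*(a \mid x) \;\propto\; \pi_0(a \mid x)\,\exp\!\big(r(x, a)/\eta\big).
$$

Because the offline setting only covers actions from a fixed dataset, iterative variants re-sample from the current policy and collect fresh preference labels, which is the exploration the claim refers to.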

Authors

Sources

Referenced by nodes (1)