claim
The algorithm f-PO (f-divergence Preference Optimization) aligns language models with human preferences by minimizing an f-divergence between the policy being optimized and the optimal preference-aligned policy.
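
To illustrate the idea, here is a minimal, hypothetical sketch of one special case: with the reverse-KL choice of f-divergence, an f-PO-style objective on pairwise preference data reduces to a DPO-like logistic loss over implicit rewards. The function name, the `beta` parameter, and the specific reduction shown are illustrative assumptions, not the authors' reference implementation.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fpo_reverse_kl_loss(logp_chosen: float, logp_rejected: float,
                        ref_logp_chosen: float, ref_logp_rejected: float,
                        beta: float = 0.1) -> float:
    """Hypothetical sketch: reverse-KL instance of an f-PO-style objective.

    The policy's implicit reward for a response is taken as
    beta * (log pi_theta(y|x) - log pi_ref(y|x)); the loss is the
    negative log-likelihood that the chosen response beats the
    rejected one under a Bradley-Terry model of preferences.
    """
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Minimizing this pushes the implicit-reward margin toward the
    # preference data, driving the policy toward the optimal policy.
    return -math.log(sigmoid(r_chosen - r_rejected))
```

When the policy matches the reference model exactly, the margin is zero and the loss equals log 2; widening the margin in favor of the chosen response lowers the loss.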

Authors

Sources

Referenced by nodes (1)