Reference
The paper 'Direct Preference Optimization: Your Language Model Is Secretly a Reward Model' (Rafailov et al., 2023) introduces Direct Preference Optimization (DPO), a method for aligning language models to human preference data. DPO optimizes the policy directly on preference pairs, avoiding the explicit reward-model training and reinforcement-learning loop used in standard RLHF.
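For context, the core DPO objective on a single preference pair can be sketched as below. This is a minimal illustration, not code from the paper: the function name and the toy log-probability inputs are hypothetical, and `beta` is the KL-regularization strength from the paper's loss.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)])
    where w = chosen (preferred) response, l = rejected response."""
    # Implicit reward margin: how much more the policy (vs. the frozen
    # reference model) favors the chosen response over the rejected one.
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(x)) computed stably as log(1 + exp(-x)).
    return math.log1p(math.exp(-logits))

# Toy usage: log-probs are made up; lower loss when the policy
# favors the chosen response more than the reference does.
print(dpo_loss(-1.0, -2.0, -1.5, -1.5, beta=1.0))
```

With equal policy and reference log-probabilities the margin is zero and the loss is log 2; it shrinks as the policy increasingly prefers the chosen response relative to the reference.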

Authors

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn

Sources

Referenced by nodes (1)