Reference
The paper 'Direct Preference Optimization: Your Language Model Is Secretly a Reward Model' introduces Direct Preference Optimization (DPO), a method for aligning language models directly on preference data without training a separate reward model.
Authors
- Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)