claim
Recent research on Reinforcement Learning (RL) for model alignment focuses on dissecting how RL alters model behavior, comparing the optimization landscapes of different algorithms, and understanding the risks of reward hacking.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- reinforcement learning concept