claim
Reward hacking, defined as a model exploiting flaws in its reward model, is a persistent theoretical concern in the development of large language models.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept