Claim
A fundamental open theoretical problem in Large Language Model (LLM) alignment is whether it is possible to mathematically guarantee that a model will never exhibit harmful behaviors, or whether such guarantees are impossible in principle given the inherently probabilistic nature of LLMs.
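The intuition behind the claim can be sketched in a few lines (this toy example is not from the cited survey; the logits and labels are hypothetical): softmax decoding assigns strictly positive probability to every token, so under pure temperature sampling no output sequence has probability exactly zero, which is why hard behavioral guarantees are in tension with probabilistic generation.

```python
import math

def softmax(logits):
    # Numerically stable softmax: exp(x - max) / sum(exp(x - max)).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits where the last token is strongly disfavored
# (e.g. a token the alignment training tried to suppress).
logits = [5.0, 3.0, -20.0]
probs = softmax(logits)

# Even the heavily penalized token retains nonzero probability mass,
# so sampling alone cannot make its emission probability exactly zero.
assert all(p > 0 for p in probs)
print(probs[-1])  # tiny, but strictly positive
```

This illustrates why the question is posed as one of guarantees: lowering a harmful behavior's probability is routine, but driving it to exactly zero requires something beyond the sampling distribution itself (e.g. hard constraints on the decoder).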
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (1)
- Large Language Models (concept)