claim
The 'Alignment Impossibility' theorems suggest that it may be fundamentally infeasible to remove specific behaviors from large language models without degrading their general capabilities.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (1)
- Large Language Models concept