Reference
Kim et al. (2025b) formalize prompting as varying an external program under a fixed Transformer executor, define the prompt-induced hypothesis class, and provide a constructive decomposition that separates routing via attention, local arithmetic via feed-forward layers, and depth-wise composition.
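The three-part decomposition can be illustrated with a toy sketch (not the paper's actual construction): attention routes information between positions, the feed-forward layer does position-wise local arithmetic, and stacking blocks gives depth-wise composition. All names and shapes below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_route(X, Wq, Wk, Wv):
    # Routing: attention mixes information ACROSS positions.
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(Wq.shape[1])
    return softmax(scores, axis=-1) @ (X @ Wv)

def ffn_local(X, W1, b1, W2, b2):
    # Local arithmetic: applied independently AT each position.
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

def block(X, params):
    # One layer: route, then compute locally (with residual connections).
    X = X + attention_route(X, *params["attn"])
    return X + ffn_local(X, *params["ffn"])

def run(prompt_embeddings, layers):
    # Depth-wise composition: stacking layers composes the two primitives.
    # Varying `prompt_embeddings` while `layers` stays fixed mirrors the
    # "external program under a fixed Transformer executor" view.
    X = prompt_embeddings
    for params in layers:
        X = block(X, params)
    return X
```

Here the prompt is the only thing that varies between runs; the weights in `layers` play the role of the fixed executor.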
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (2)
- attention concept
- Transformer concept