claim
Li et al. (2025a) provided a convergence analysis showing how gradient descent enables non-linear Transformers to learn Chain-of-Thought (CoT) reasoning, and quantified the sample complexity required to remain robust to noisy context examples.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (2)
- chain-of-thought concept
- gradient descent concept