claim
Current literature on Large Language Models identifies several behaviors that emerge unpredictably at scale, including in-context learning (Brown et al., 2020), complex hallucinations (Xu et al., 2024b), and 'aha moments' observed during training (Guo et al., 2025).
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (2)
- Large Language Models concept
- In-Context Learning concept