Claim
Zhu et al. (2025b) demonstrate that large language models can maintain multiple reasoning trajectories in superposition within a continuous latent space, enabling implicit parallel thinking that surpasses conventional serial, token-by-token reasoning.
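To make the mechanism behind the claim concrete, here is a minimal, self-contained sketch of one way reasoning can stay "in superposition" in continuous space: instead of committing to a single token at each step, the model feeds back a probability-weighted mixture of token embeddings, so several candidate continuations remain blended in one latent vector. This is an illustrative toy, not Zhu et al.'s actual method; the GRU backbone, the sizes `V` and `D`, and the helper names `step_discrete`/`step_continuous` are all assumptions made for the example.

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model: embedding table, recurrent backbone,
# and an output head. All sizes and components here are illustrative.
V, D = 16, 32                      # vocabulary size, hidden dimension
embed = nn.Embedding(V, D)
backbone = nn.GRU(D, D, batch_first=True)
lm_head = nn.Linear(D, V)

def step_discrete(h, token_id):
    """Serial decoding: commit to one token and feed its embedding back.

    Each step collapses the distribution to a single trajectory.
    """
    x = embed(token_id).view(1, 1, D)
    out, h = backbone(x, h)
    next_id = lm_head(out[:, -1]).argmax(-1)
    return h, next_id

def step_continuous(h, soft_input):
    """Latent decoding: feed back a probability-weighted embedding mixture.

    The mixture vector keeps several candidate continuations blended
    ("in superposition") instead of collapsing to one token per step.
    """
    out, h = backbone(soft_input.view(1, 1, D), h)
    probs = lm_head(out[:, -1]).softmax(-1)   # distribution over next tokens
    soft_next = probs @ embed.weight          # expected embedding: a blend
    return h, soft_next, probs

# Usage: run a few latent steps, then read out the most likely token.
h = torch.zeros(1, 1, D)
soft = embed(torch.tensor([0])).view(-1)      # start from token 0's embedding
for _ in range(4):
    h, soft, probs = step_continuous(h, soft)
print("top token after latent steps:", probs.argmax(-1).item())
```

The design contrast is in the feedback path: `step_discrete` collapses to an `argmax` token each step, while `step_continuous` defers that collapse, letting the latent vector carry implicit weight over many trajectories until a readout is finally taken.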
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (2)
- Large Language Models (concept)
- Superposition (concept)