Relations (1)

related 0.10 — supporting 1 fact

Large Language Models and Superposition are related because [1] demonstrates that large language models can maintain multiple reasoning trajectories in a state of superposition within a continuous latent space, enabling implicit parallel thinking.

Facts (1)

Sources
A Survey on the Theory and Mechanism of Large Language Models (arXiv, arxiv.org) — 1 fact
Claim: Zhu et al. (2025b) demonstrate that large language models can maintain multiple reasoning trajectories in a state of superposition within a continuous latent space, facilitating implicit parallel thinking that exceeds traditional serial reasoning capabilities.