claim
Researchers (2024) identified a functional bifurcation across Transformer layers: lower layers shift representations away from pre-training priors toward context-aware embeddings, while middle-to-higher layers act as answer writers, causally integrating information from previously generated Chain-of-Thought (CoT) steps.
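
The claim can be made concrete with a layer-wise causal intervention (activation patching). The sketch below is not the cited authors' method or code; it assumes a Hugging Face GPT-2 model, illustrative prompts, and patching only the final token position, purely to show how one might probe whether a given layer causally carries CoT information into the answer.

```python
# Minimal activation-patching sketch (illustrative, not the paper's setup).
# Idea: cache hidden states from a "clean" prompt containing a CoT step,
# then run a "corrupt" prompt without that step while overwriting one
# layer's output at the answer position with the clean activation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Hypothetical prompts: the clean one includes a correct CoT step.
clean   = "Q: 12 + 7? Reasoning: 12 plus 7 is 19. Answer:"
corrupt = "Q: 12 + 7? Reasoning: the numbers are given. Answer:"

clean_ids   = tok(clean, return_tensors="pt").input_ids
corrupt_ids = tok(corrupt, return_tensors="pt").input_ids

# Cache clean-run hidden states at every layer (index 0 = embeddings).
with torch.no_grad():
    clean_hidden = model(clean_ids, output_hidden_states=True).hidden_states

answer_id = tok(" 19").input_ids[0]  # first token of the correct answer

def patched_answer_logit(layer_idx: int) -> float:
    """Run the corrupt prompt, but replace layer `layer_idx`'s output at the
    final position with the clean run's activation; return the logit of the
    correct answer token at the last position."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, -1, :] = clean_hidden[layer_idx + 1][:, -1, :]
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    handle = model.transformer.h[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        logits = model(corrupt_ids).logits
    handle.remove()
    return logits[0, -1, answer_id].item()

for i in range(model.config.n_layer):
    print(f"layer {i:2d}: answer logit {patched_answer_logit(i):.3f}")
```

Under the claim, patches at middle-to-higher layers (the putative answer writers) should restore the correct-answer logit more strongly than patches at lower layers, whose role is representational rather than answer-writing.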

Authors

Sources

Referenced by nodes (1)