claim
Fine-tuning and advanced pre-training strengthen large language models' tendency to maintain goal-oriented abstractions during decoding, a tendency that prioritizes task completion over recovering the world's state and dynamics.

Authors

Sources

Referenced by nodes (3)