claim
Fine-tuning and advanced pre-training strengthen large language models' tendency to maintain goal-oriented abstractions during decoding, prioritizing task completion over recovery of the world's state and dynamics.
Authors
Sources
- Do LLMs Build World Representations? Probing Through ... neurips.cc via serper
Referenced by nodes (3)
- Large Language Models concept
- fine-tuning concept
- Pre-training concept