claim
Probing experiments reported in "Do LLMs Build World Representations? Probing Through the Lens ..." indicate that fine-tuning and more advanced pre-training strengthen the tendency of Large Language Models to maintain goal-oriented abstractions during decoding, prioritizing task completion over recovery of the world's state and dynamics.
Authors
Sources
- Do LLMs Build World Representations? Probing Through the Lens ... proceedings.neurips.cc via serper
Referenced by nodes (1)
- Large Language Models concept
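The cited paper's method is probing: training a lightweight classifier on frozen model hidden states to test whether world-state information is linearly decodable from them. A minimal sketch of such a linear probe, using synthetic activations in place of real LLM hidden states (all data, dimensions, and the two-way "world state" labels below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy stand-in for frozen LLM hidden states: two classes of "world states"
# separated along one random direction, with unit Gaussian noise.
rng = np.random.default_rng(0)
d, n = 64, 400                       # hidden size and examples per class (illustrative)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

X0 = rng.normal(size=(n, d)) - 1.5 * direction   # class-0 activations
X1 = rng.normal(size=(n, d)) + 1.5 * direction   # class-1 activations
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

def train_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe fit with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_linear_probe(X, y)
accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

High probe accuracy is read as evidence that the state is represented; the claim above concerns the opposite pattern, where such world-state probes weaken relative to task-oriented features after fine-tuning.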