Claim
Gan and Liu (2025) propose a 'reverse-bottleneck' framework, which posits that the upper bound on a post-trained model's generalization error is negatively correlated with the 'information gain' obtained from the generative model.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- generative models concept