claim
Gan and Liu (2025) propose a "reverse-bottleneck" framework, which posits that the upper bound on a post-trained model's generalization error is negatively correlated with the "information gain" obtained from the generative model: the more information gained, the tighter the bound.
