Claim
Finetuning a large language model modifies its response style with respect to expressed confidence, but the underlying knowledge gaps and exposure-bias patterns acquired during pretraining remain encoded in the base model.

Authors

Sources

Referenced by nodes (4)