Relations (1)

related (1.58) — strongly supporting, 2 facts

Pre-trained language models are prone to hallucinations that originate as statistical errors in binary classification, as described in [1]; these issues can be mitigated by fine-tuning the models on local data with transfer-learning strategies, as noted in [2].
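The fine-tuning strategy noted in [2] can be illustrated with a short sketch. This is a minimal example, assuming the Hugging Face transformers and datasets packages; "distilbert-base-uncased" stands in for any publicly pretrained checkpoint, and local_examples is a hypothetical two-example local dataset. It illustrates the generic transfer-learning recipe, not the cited paper's exact setup.

```python
# Minimal transfer-learning sketch: start from publicly pretrained weights,
# then fine-tune briefly on local data. Assumes `transformers` and `datasets`
# are installed; the model name and data below are illustrative placeholders.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical in-domain examples the public model has never seen.
local_examples = {
    "text": ["Tracer uptake is elevated in the left lobe.",
             "The scan shows no abnormal uptake."],
    "label": [1, 0],
}

model_name = "distilbert-base-uncased"  # publicly pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_ds = Dataset.from_dict(local_examples).map(tokenize, batched=True)

# A short fine-tuning run: the pretrained weights retain general language
# knowledge (generalization) while the local data adapts the model to the
# target domain (specialization).
args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```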

Facts (2)

Sources
On Hallucinations in Artificial Intelligence–Generated Content ... (The Journal of Nuclear Medicine, jnm.snmjournals.org), 1 fact
Claim: Transfer learning, which involves leveraging publicly pretrained models and fine-tuning them on local data, is an effective strategy for balancing generalization and specialization to mitigate hallucinations.
[2509.04664] Why Language Models Hallucinate (arXiv, arxiv.org), 1 fact
Claim: Hallucinations in pretrained language models originate as errors in binary classification, arising through natural statistical pressures when incorrect statements cannot be distinguished from facts.
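To make the binary-classification framing concrete, here is a toy, self-contained illustration (it is not the construction from the arXiv paper): a hypothetical scorer assigns indistinguishable scores to true and fabricated statements, so thresholding those scores misclassifies the fabrications, and sampling from the same scores hallucinates at a matching rate.

```python
# Toy illustration (not the paper's proof): when a model's scores cannot
# separate true statements from plausible-but-false ones, the induced
# "is-it-valid" classifier errs, and sampling from the same scores
# produces hallucinations.
import random

random.seed(0)

# Hypothetical candidate completions for one prompt: (statement, is_true).
candidates = [("fact A", True), ("fact B", True),
              ("made-up C", False), ("made-up D", False)]

# Statistical pressure: false statements look as fluent as true ones, so the
# scorer assigns them overlapping scores (here identical, the worst case).
scores = {stmt: 1.0 for stmt, _ in candidates}

# 1) Classification view: threshold the score to call a statement valid.
threshold = 0.5
misclassified = sum(1 for stmt, is_true in candidates
                    if (scores[stmt] >= threshold) != is_true)
print("classification errors:", misclassified, "of", len(candidates))

# 2) Generation view: sample candidates in proportion to the same scores.
total = sum(scores.values())
draws = random.choices([s for s, _ in candidates],
                       weights=[scores[s] / total for s, _ in candidates],
                       k=1000)
truth = dict(candidates)
hallucination_rate = sum(1 for d in draws if not truth[d]) / len(draws)
print(f"hallucination rate when sampling: {hallucination_rate:.2f}")
```

The only point of the toy is the coupling between the two views: a score function that fails to distinguish fabrications in the classification step also emits them when used for generation, which is the intuition behind the claim above.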