claim
Extrinsic hallucinations in large language models arise in open-ended question-answering and narrative-generation tasks, where the model outputs plausible-sounding details that are not grounded in, and cannot be verified against, the source text.
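A minimal sketch of the idea behind the claim: since extrinsic hallucinations are details absent from the source text, a crude proxy is to flag output sentences whose content words mostly do not appear in the source. This is an illustrative lexical-overlap heuristic only (the function names, stop list, and threshold are assumptions, not a real hallucination detector).

```python
import re

def content_words(text: str) -> set[str]:
    # Lowercase words minus a tiny stop list; a rough notion of "content".
    stop = {"the", "a", "an", "of", "in", "on", "and", "or", "is", "was", "to", "it", "by"}
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop}

def ungrounded_sentences(source: str, output: str, threshold: float = 0.5) -> list[str]:
    # Flag output sentences whose content-word overlap with the source
    # falls below the threshold -- a toy proxy for extrinsic hallucination.
    src = content_words(source)
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = content_words(sent)
        if words and len(words & src) / len(words) < threshold:
            flagged.append(sent)
    return flagged

source = "The bridge opened in 1932 after four years of construction."
output = "The bridge opened in 1932. It was designed by a famous French engineer."
print(ungrounded_sentences(source, output))
# → ['It was designed by a famous French engineer.']
```

Real systems use entailment models or retrieval-based fact checking rather than lexical overlap, but the grounding question they ask is the same: is each generated detail supported by the source?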
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (2)
- Large Language Models concept
- extrinsic hallucination concept