extrinsic hallucination
Also known as: extrinsic hallucinations
Facts (11)
Sources
EdinburghNLP/awesome-hallucination-detection - GitHub github.com 5 facts
- The Survey of Hallucination in Natural Language Generation defines extrinsic hallucination as a case where the generated output cannot be verified from the source content, and intrinsic hallucination as a case where the generated output contradicts the source content.
- Neural Path Hunter defines extrinsic hallucination as an utterance that brings a new span of text that does not correspond to a valid triple in a knowledge graph, and intrinsic hallucination as an utterance that misuses either the subject or object in a knowledge graph triple such that there is no direct path between the two entities.
- The ETF framework identifies fabricated entities as extrinsic hallucinations and incorrect entity attributions as intrinsic hallucinations.
- A large-scale human study of hallucinations in extreme summarization using XSum (BBC articles) found that extrinsic hallucinations are frequent, even in gold summaries, and that textual entailment correlates best with human faithfulness and factuality, compared to ROUGE, BERTScore, or QA-based metrics.
- HalluLens introduces a taxonomy that separates intrinsic hallucinations from extrinsic hallucinations and provides a benchmark with dynamically generated extrinsic tasks to reduce data leakage.
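The Neural Path Hunter definition above can be sketched as a check against a knowledge graph: a generated triple is faithful if it exists in the graph, intrinsic if it misuses known entities, and extrinsic if it introduces a span outside the graph. This is a minimal illustration assuming a toy graph stored as a set of (subject, relation, object) tuples; all names and data here are illustrative, not from the paper's code.

```python
# Toy knowledge graph: a set of (subject, relation, object) triples.
KG = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def entities(kg):
    """Collect every subject and object that appears in the graph."""
    ents = set()
    for s, _, o in kg:
        ents.update((s, o))
    return ents

def classify(triple, kg):
    """Label a generated triple per the extrinsic/intrinsic split."""
    if triple in kg:
        return "faithful"      # matches a valid KG triple
    s, _, o = triple
    known = entities(kg)
    if s in known and o in known:
        return "intrinsic"     # known entities, but no such direct path
    return "extrinsic"         # introduces a span absent from the KG

print(classify(("Paris", "capital_of", "France"), KG))    # faithful
print(classify(("Paris", "capital_of", "Germany"), KG))   # intrinsic
print(classify(("Paris", "capital_of", "Atlantis"), KG))  # extrinsic
```

Real systems would additionally need entity linking and relation extraction to turn free-form utterances into candidate triples before this check applies.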
Survey and analysis of hallucinations in large language models frontiersin.org Sep 29, 2025 2 facts
- Extrinsic hallucinations in large language models appear in open-ended question-answering or narrative-generation tasks, where the model outputs plausible-sounding but ungrounded details that are not present in the source text.
- GPT-4 significantly outperformed LLaMA 2 and DeepSeek in hallucination robustness, while DeepSeek provided moderate improvements over LLaMA 2, particularly in extrinsic hallucinations.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Nov 4, 2024 1 fact
- Large Language Models (LLMs) sometimes generate information that conflicts with existing sources (intrinsic hallucination) or cannot be verified (extrinsic hallucination).
LLM Hallucination Detection and Mitigation: State of the Art in 2026 zylos.ai Jan 27, 2026 1 fact
- Extrinsic hallucinations introduce information not present in the ground truth or source material, and split into extrinsic factual hallucinations (consistent with general knowledge but absent from the source) and extrinsic non-factual hallucinations.
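The taxonomy above reduces to routing on two questions: is the statement grounded in the source, and does it match world knowledge? A minimal sketch, assuming boolean judgments are already available from upstream detectors (the function and parameter names are illustrative, not from any specific library):

```python
def label_statement(supported_by_source: bool,
                    contradicts_source: bool,
                    matches_world_knowledge: bool) -> str:
    """Route a generated statement into the taxonomy's buckets."""
    if supported_by_source:
        return "faithful"              # grounded in the source material
    if contradicts_source:
        return "intrinsic"             # conflicts with the source content
    if matches_world_knowledge:
        return "extrinsic-factual"     # true in general, absent from source
    return "extrinsic-non-factual"     # neither sourced nor factual

print(label_statement(False, False, True))   # extrinsic-factual
print(label_statement(False, False, False))  # extrinsic-non-factual
```

In practice the three boolean inputs are themselves the hard part, typically produced by entailment models or retrieval-based fact checkers.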
awesome-hallucination-detection/README.md at main - GitHub github.com 1 fact
- The research work referenced in the EdinburghNLP/awesome-hallucination-detection repository introduces a taxonomy that separates intrinsic hallucinations from extrinsic hallucinations.
Pascale Fung's Post - LLM Hallucination Benchmark linkedin.com 11 months ago 1 fact
- Neocortix is developing an LLM Hallucination Mitigation system that focuses specifically on extrinsic hallucinations and correct refusal.