Relations (1)
related (score 2.58) — strongly supported by 5 facts
The relationship between hallucination and prompt is established by research identifying the prompt as a primary source of model errors, as seen in [1] and [2]. Furthermore, [3], [4], and [5] demonstrate that prompt design and specific modifications directly influence the frequency and nature of hallucinations produced by a model.
Facts (5)
Sources
Survey and analysis of hallucinations in large language models — frontiersin.org — 4 facts
Claim: Consistent hallucinations across different models suggest prompt-induced errors, while divergent hallucination patterns imply architecture-specific behaviors or training artifacts.
Claim: A positive Joint Attribution Score (JAS) indicates that specific prompt-model combinations amplify hallucinations beyond what would be expected from individual prompt or model effects alone, suggesting the prompt and model jointly contribute to the error.
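The claim above describes JAS as an interaction effect: hallucination beyond what independent prompt and model effects predict. The source does not give the formula, so the sketch below assumes an interaction-style definition (observed joint rate minus the rate expected if prompt and model effects combined independently); the function name and the independence model are illustrative assumptions, not the published metric.

```python
def joint_attribution_score(joint_rate: float,
                            prompt_rate: float,
                            model_rate: float) -> float:
    """Hypothetical JAS sketch (assumed formula, not from the source).

    joint_rate:  observed hallucination rate for this prompt-model pair
    prompt_rate: average hallucination rate of this prompt across models
    model_rate:  average hallucination rate of this model across prompts

    Under independence, the expected joint rate is
    p + m - p*m (probability of hallucinating under either effect).
    A positive return value means the pair amplifies hallucinations
    beyond the independent effects, matching the claim above.
    """
    expected = prompt_rate + model_rate - prompt_rate * model_rate
    return joint_rate - expected

# Illustrative numbers only: a pair that hallucinates 40% of the time,
# while its prompt and model individually predict only ~23.5%.
score = joint_attribution_score(0.40, 0.10, 0.15)
```

Under these assumed inputs the score is positive (about 0.165), which is the condition the claim interprets as joint prompt-model amplification.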
Reference: Bang and Madotto (2023) developed neural attribution predictors to identify whether a hallucination originates from the prompt or the model.
Claim: If a hallucinated answer disappears when a question is asked more explicitly or via Chain-of-Thought, the cause is likely prompt-related; if the hallucination persists across all prompt variants, the cause likely lies in the model's internal behavior.
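The diagnostic in the claim above can be sketched as a small decision rule. All variant names (`baseline`, `explicit`, `chain_of_thought`) and the function itself are illustrative assumptions about how such a check might be wired up, not an implementation from the cited survey.

```python
def attribute_hallucination(hallucinated: dict) -> str:
    """Sketch of the prompt-variant diagnostic from the claim above.

    `hallucinated` maps prompt-variant names (illustrative) to whether
    the model still hallucinated under that variant.
    """
    # Persists across every variant -> likely the model's internal behavior.
    if all(hallucinated.values()):
        return "model"
    # Disappears under a more explicit or Chain-of-Thought phrasing
    # -> likely prompt-related.
    if (hallucinated.get("explicit") is False
            or hallucinated.get("chain_of_thought") is False):
        return "prompt"
    return "inconclusive"

result = attribute_hallucination(
    {"baseline": True, "explicit": False, "chain_of_thought": False}
)
# result == "prompt": the hallucination vanished under clearer prompting.
```

A real study would run each variant multiple times and compare rates rather than single booleans; the rule above only captures the qualitative logic of the claim.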
A framework to assess clinical safety and hallucination rates of LLMs ... — nature.com — 1 fact
Claim: Modifying the prompt from the baseline used in Experiment 1 to include a style update used in Experiment 8 resulted in a reduction of both major and minor omissions, though it caused a slight increase in minor hallucinations.