Relations (1)

related 2.00 — strongly supporting 3 facts

Data quality is identified as a primary causal factor for hallucinations in LLMs [1]: inaccuracies and inconsistencies in training data propagate directly into errors in model outputs [2]. Consequently, improving data quality through systematic data cleaning during preprocessing is a key strategy for mitigating these hallucinations [3].
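As a concrete illustration of what such cleaning can involve, the sketch below is a hypothetical example, not taken from the cited sources: it deduplicates question-answer training records and drops records whose question appears with conflicting answers. The Record type, the clean function, and the sample data are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical, not from the cited sources) of a "systematic
# data cleaning" preprocessing pass: deduplicate records and drop internally
# inconsistent ones before training.
from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    question: str
    answer: str


def clean(records: list[Record]) -> list[Record]:
    """Remove exact duplicates and records whose question maps to conflicting answers."""
    # Drop exact duplicates while preserving order.
    seen: set[Record] = set()
    deduped: list[Record] = []
    for r in records:
        if r not in seen:
            seen.add(r)
            deduped.append(r)

    # Flag questions that appear with more than one distinct answer (an inconsistency).
    answers_by_question: dict[str, set[str]] = {}
    for r in deduped:
        answers_by_question.setdefault(r.question, set()).add(r.answer)
    conflicting = {q for q, answers in answers_by_question.items() if len(answers) > 1}

    # Keep only records whose question has a single, consistent answer.
    return [r for r in deduped if r.question not in conflicting]


if __name__ == "__main__":
    data = [
        Record("Capital of France?", "Paris"),
        Record("Capital of France?", "Paris"),          # exact duplicate -> collapsed
        Record("Boiling point of water (°C)?", "100"),
        Record("Boiling point of water (°C)?", "90"),   # conflicting answers -> dropped
    ]
    print(clean(data))  # -> [Record(question='Capital of France?', answer='Paris')]
```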

Facts (3)

Sources
Why Do Large Language Models Hallucinate? | AWS Builder Center (builder.aws.com), 1 fact
Claim: Large Language Model (LLM) hallucinations are caused by three primary factors: data quality issues, model training methodologies, and architectural limitations.
On Hallucinations in Artificial Intelligence–Generated Content ... | The Journal of Nuclear Medicine (jnm.snmjournals.org), 1 fact
Claim: Systematic data cleaning during preprocessing can reduce inconsistencies and improve data fidelity to mitigate hallucinations, although defining objective criteria for data quality standards remains a complex challenge.
Medical Hallucination in Foundation Models and Their ... | medRxiv (medrxiv.org), 1 fact
Claim: Enhancing data quality and curation is critical for reducing hallucinations in AI models because inaccuracies or inconsistencies in training data can propagate errors in model outputs.