Relations (1)

related 2.00 — strongly supporting 3 facts

Prompting strategies are identified as a key factor in mitigating or influencing the occurrence of hallucinations in large language models, as evidenced by their role in reduction techniques [1], their analysis in academic literature [2], and their inclusion as a conditional variable in probabilistic models of hallucination [3].

Facts (3)

Sources
Survey and analysis of hallucinations in large language models (frontiersin.org, Frontiers), 2 facts
Formula: Hallucination events in large language models can be modeled probabilistically, where H denotes hallucination occurrence, P the prompting strategy, and M the model characteristics; Bayes' theorem then gives P(P, M | H) = P(H | P, M) * P(P, M) / P(H).
Claim: The paper 'Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior' was published in Frontiers in Artificial Intelligence on September 30, 2025, by Anh-Hoang D, Tran V, and Nguyen L-M.
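The Bayesian identity in the formula fact above can be sketched numerically. The function and all numeric inputs below are illustrative assumptions, not values from the cited paper:

```python
# Minimal sketch of the Bayes identity P(P, M | H) = P(H | P, M) * P(P, M) / P(H).
# All probabilities below are hypothetical placeholders.

def bayes_posterior(p_h_given_pm: float, p_pm: float, p_h: float) -> float:
    """Posterior probability of a (prompting strategy, model) pair given a hallucination."""
    return p_h_given_pm * p_pm / p_h

# Assumed numbers: a strategy/model pair used 20% of the time that hallucinates
# 10% of the time, against an overall hallucination rate of 8%.
posterior = bayes_posterior(p_h_given_pm=0.10, p_pm=0.20, p_h=0.08)
print(round(posterior, 3))  # 0.25
```

Read the result as: if a hallucination is observed, the probability it arose under that particular prompting strategy and model combination is 0.25 under these assumed rates.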
The Role of Hallucinations in Large Language Models (cloudthat.com, CloudThat), 1 fact
Claim: Techniques such as Retrieval-Augmented Generation (RAG), fact-checking pipelines, and improved prompting can significantly reduce, though not completely prevent, hallucinations in large language models.