Relations (1)
related 2.32 — strongly supporting 4 facts
Large Language Models are directly influenced by prompting strategies, which are used to guide their output generation [1] and mitigate hallucination risks [2]. The relationship is further evidenced by the probabilistic modeling of hallucinations based on these strategies [3] and academic research specifically analyzing the attribution of model behavior to these techniques [4].
Facts (4)
Sources
Survey and analysis of hallucinations in large language models frontiersin.org 2 facts
formula — Hallucination events in Large Language Models can be modeled probabilistically as random events: with H denoting hallucination occurrence, P a prompting strategy, and M the model characteristics, Bayes' rule gives the posterior over (P, M) given an observed hallucination: P(P, M | H) = (P(H | P, M) * P(P, M)) / P(H).
claim — The paper 'Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior' was published in Frontiers in Artificial Intelligence on September 30, 2025, by authors Anh-Hoang D, Tran V, and Nguyen L-M.
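The Bayes formulation in the formula fact above can be sketched in a few lines. The function mirrors the equation term by term; the numeric inputs are purely hypothetical, not taken from the cited survey:

```python
def posterior_pm_given_h(p_h_given_pm: float, p_pm: float, p_h: float) -> float:
    """P(P, M | H) = P(H | P, M) * P(P, M) / P(H) -- Bayes' rule for the
    posterior over a (prompting strategy, model) pair given a hallucination."""
    return p_h_given_pm * p_pm / p_h

# Hypothetical values: a configuration used 20% of the time that
# hallucinates 10% of the time, against a 5% overall hallucination rate.
posterior = posterior_pm_given_h(p_h_given_pm=0.10, p_pm=0.20, p_h=0.05)
print(round(posterior, 2))  # 0.4
```

Reading the result: under these made-up numbers, 40% of observed hallucinations would be attributable to that particular prompting/model configuration.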
Combining large language models with enterprise knowledge graphs frontiersin.org 1 fact
procedure — Prompting for Named Entity Recognition involves using entity definitions, questions, sentences, and output examples to guide Large Language Models in understanding entity types and extracting answers (Ashok and Lipton, 2023; Kholodna et al., 2024).
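The NER prompting procedure above can be sketched as a simple prompt builder. The function name, field layout, and example values are illustrative assumptions, not the exact templates used by the cited works:

```python
def build_ner_prompt(entity_type: str, definition: str,
                     sentence: str, example_output: str) -> str:
    """Assemble an NER prompt from the four ingredients named in the
    procedure: entity definition, question, sentence, and output example."""
    return (
        f"Entity type: {entity_type}\n"
        f"Definition: {definition}\n"
        f"Example output: {example_output}\n"
        f"Question: Which {entity_type} entities appear in the sentence below?\n"
        f"Sentence: {sentence}\n"
        "Answer:"
    )

prompt = build_ner_prompt(
    entity_type="ORG",
    definition="Named companies, institutions, or agencies.",
    sentence="Acme Corp partnered with the University of Oslo.",
    example_output='["ExampleCo"]',
)
print(prompt)
```

The resulting string would be sent to an LLM as-is; the definition and example constrain what the model treats as a valid entity.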
The Role of Hallucinations in Large Language Models - CloudThat cloudthat.com 1 fact
claim — Techniques such as Retrieval-Augmented Generation (RAG), fact-checking pipelines, and improved prompting can significantly reduce, though not completely prevent, hallucinations in large language models.
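The RAG-style mitigation in this claim can be sketched minimally: retrieve supporting passages, then constrain the prompt to them. The keyword-overlap retriever and all names here are toy assumptions (real systems use embedding search), shown only to make the grounding step concrete:

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by crude keyword overlap with the query."""
    def score(doc: str) -> int:
        return sum(word in doc.lower() for word in query.lower().split())
    return sorted(corpus, key=score, reverse=True)[:k]

def grounded_prompt(query: str, passages: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context,
    which is the mechanism by which RAG curbs (but cannot fully prevent)
    hallucination."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below; say 'unknown' if absent.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

corpus = [
    "LLMs can hallucinate plausible but false facts.",
    "RAG grounds generation in retrieved documents.",
]
print(grounded_prompt("How does RAG reduce hallucinations?",
                      retrieve("RAG hallucinations", corpus)))
```

The explicit "say 'unknown' if absent" instruction is one common prompting choice for discouraging fabricated answers when retrieval comes up empty.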