Relations (1)

related 2.32 — strongly supporting 4 facts

The concept 'Large Language Models' is the primary subject of the paper 'Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior', which provides a comprehensive empirical analysis of their hallucination behaviors [1]. The paper evaluates these models using standardized hallucination benchmarks [2] and introduces a framework for detecting hallucinations in them [3]; the publication details are recorded in [4].

Facts (4)

Sources
Unknown source: 2 facts
claim: The authors of the paper 'Survey and analysis of hallucinations in large language models' introduce a novel framework designed to determine whether large language models are hallucinating.
claim: The authors of the paper 'Survey and analysis of hallucinations in large language models' present a comprehensive survey and empirical analysis of hallucination attribution in large language models.
Survey and analysis of hallucinations in large language models (Frontiers, frontiersin.org): 2 facts
claim: The paper 'Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior' was published in Frontiers in Artificial Intelligence on September 30, 2025, by authors Anh-Hoang D, Tran V, and Nguyen L-M.
procedure: The authors of the survey "Survey and analysis of hallucinations in large language models" conducted controlled experiments on multiple Large Language Models (GPT-4, LLaMA 2, DeepSeek, Qwen) using standardized hallucination evaluation benchmarks, specifically TruthfulQA, HallucinationEval, and RealToxicityPrompts.