Relations (1)

related (score 2.32) — strongly supporting, 4 facts

Semantic entropy is an uncertainty quantification method used to detect hallucinations and errors in large language models, as established in research by Farquhar et al. (2024) and Ji et al. (2023) [1], [2], [3], [4].

Facts (4)

Sources
A framework to assess clinical safety and hallucination rates of LLMs ... — nature.com (Nature), 1 fact
reference: Farquhar et al. (2024) proposed semantic entropy as a method for detecting hallucinations in large language models, published in Nature.
Re-evaluating Hallucination Detection in LLMs — arxiv.org (arXiv), 1 fact
reference: Uncertainty-based methods for hallucination detection in large language models include Perplexity (Ren et al., 2023), Length-Normalized Entropy (LN-Entropy) (Malinin and Gales, 2021), and Semantic Entropy (SemEntropy) (Farquhar et al., 2024); these methods use multiple generations to capture sequence-level uncertainty.
Medical Hallucination in Foundation Models and Their ... — medrxiv.org (medRxiv), 1 fact
procedure: Uncertainty quantification uses sequence log-probability and semantic entropy measures to identify potential Clinical Data Fabrication and Procedure Description Errors in large language models.
Detecting hallucinations with LLM-as-a-judge: Prompt ... — datadoghq.com (Aritra Biswas, Noé Vernier · Datadog), 1 fact
reference: 'Detecting hallucinations in large language models using semantic entropy' was published in Nature by Farquhar, S. et al. (2024).
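The multi-generation recipe these facts describe can be sketched in a few lines of Python. This is a minimal illustration, not the published implementation: Farquhar et al. (2024) cluster generations by bidirectional NLI entailment, whereas the exact-match clustering below is a toy stand-in, and the sample answers and log-probabilities are invented for the example.

```python
import math
from collections import defaultdict

def semantic_entropy(samples):
    """Toy semantic entropy over multiple generations of one prompt.

    `samples` is a list of (answer, log_prob) pairs. Answers are grouped
    into meaning clusters (here: normalized exact match, standing in for
    the NLI-entailment clustering of Farquhar et al., 2024), probability
    mass is summed per cluster, and entropy is taken over clusters.
    """
    clusters = defaultdict(float)
    for answer, log_prob in samples:
        key = answer.strip().lower()          # toy semantic clustering
        clusters[key] += math.exp(log_prob)   # probability mass per meaning
    total = sum(clusters.values())
    probs = [p / total for p in clusters.values()]  # renormalize over clusters
    return -sum(p * math.log(p) for p in probs)     # entropy over meanings

# Hypothetical samples: agreeing generations give low entropy,
# divergent generations give high entropy (a hallucination signal).
consistent = [("Paris", -0.1), ("paris", -0.2), ("Paris", -0.15)]
divergent = [("Paris", -0.1), ("Lyon", -0.2), ("Nice", -0.15)]
print(semantic_entropy(consistent))  # low: generations agree
print(semantic_entropy(divergent))   # high: generations disagree
```

In practice the (answer, log_prob) pairs come from sampling an LLM several times at nonzero temperature and reading back sequence log-probabilities; high entropy over meaning clusters flags answers worth verifying.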