Relations (1)
related 2.58 — strongly supporting 5 facts
Large Language Models and Uncertainty Quantification are related because UQ is applied to LLMs through logit-based, sampling-based, and verbalized confidence approaches [1] and through measures that detect errors in LLM output [2]; research such as Lin et al. (2023) on black-box LLMs [3] and Nikitin et al. (2024) on kernel language entropy [4] addresses challenges unique to LLMs [5].
Facts (5)
Sources
Re-evaluating Hallucination Detection in LLMs (arxiv.org, 2 facts)
reference: Lin et al. (2023) proposed 'Generating with Confidence', a method for uncertainty quantification in black-box Large Language Models.
reference: Nikitin et al. (2024) introduced 'Kernel language entropy', a method for fine-grained uncertainty quantification in Large Language Models based on semantic similarities.
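As context for this fact: kernel language entropy generalizes semantic entropy by taking the von Neumann entropy of a unit-trace kernel built from pairwise semantic similarities between sampled answers. Below is a minimal numpy sketch of that final entropy step only; how the kernel is constructed is assumed given and not shown here.

```python
import numpy as np

def kernel_language_entropy(K: np.ndarray) -> float:
    """Von Neumann entropy of a semantic kernel over sampled generations.
    K is assumed symmetric positive semidefinite, with K[i, j] encoding
    the semantic similarity between answers i and j (construction not shown)."""
    rho = K / np.trace(K)                # normalize to unit trace
    eigvals = np.linalg.eigvalsh(rho)    # eigenvalues of the symmetric kernel
    eigvals = eigvals[eigvals > 1e-12]   # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))
```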
A Comprehensive Review of Neuro-symbolic AI for Robustness ... (link.springer.com, 1 fact)
reference: Large Language Models and multimodal systems introduce unique uncertainty challenges, such as uncertainty compounding during autoregressive generation and context-dependent shifts in uncertainty; they therefore require uncertainty quantification approaches tailored to their specific characteristics rather than methods designed for simpler discriminative models (cited as reference [56]).
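The compounding mentioned in this fact follows from the chain-rule factorization of autoregressive generation: sequence probability is the product of per-token probabilities, so even confident tokens erode sequence-level confidence. A minimal sketch with made-up per-token values:

```python
import math

# Illustrative per-token probabilities (made-up values, not from any cited model).
token_probs = [0.95, 0.90, 0.97, 0.85, 0.92]

# Autoregressive models score a sequence as the product of per-token
# probabilities, so per-token uncertainty compounds multiplicatively.
seq_prob = math.prod(token_probs)
seq_logprob = sum(math.log(p) for p in token_probs)

print(f"sequence probability: {seq_prob:.3f}")  # ~0.649, despite every token >= 0.85
print(f"sequence log-prob:    {seq_logprob:.3f}")
```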
LLM Hallucination Detection and Mitigation: State of the Art in 2026 (zylos.ai, 1 fact)
procedure: Uncertainty quantification in LLMs is primarily approached through three methods: logit-based methods (analyzing internal token probability distributions), sampling-based methods (assessing variability across multiple generations), and verbalized confidence (prompting the model to express its own confidence).
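To make the sampling-based approach concrete, here is a minimal sketch that samples a model several times and treats agreement on the modal answer as a confidence score. The `generate` function is a hypothetical stand-in for any stochastic (temperature > 0) LLM call, not an API from the cited source:

```python
import random
from collections import Counter

def sampling_based_confidence(generate, prompt, n_samples=10):
    """Sample n generations and use agreement on the most common
    answer as a crude sampling-based confidence score."""
    answers = [generate(prompt) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples

# Hypothetical stochastic generator standing in for a real LLM call.
def generate(prompt):
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

answer, confidence = sampling_based_confidence(generate, "Capital of France?")
print(answer, confidence)  # e.g. ('Paris', 0.8)
```

Logit-based methods would instead aggregate token log-probabilities (see the sequence log-probability sketch after the last fact below), while verbalized confidence simply prompts the model to report a number.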
Medical Hallucination in Foundation Models and Their ... (medrxiv.org, 1 fact)
procedure: Uncertainty Quantification uses sequence log-probability and semantic entropy measures to identify potential areas of Clinical Data Fabrication and Procedure Description Errors in Large Language Models.
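A minimal sketch of the two measures named in this fact, assuming per-token log-probabilities are available and that `same_meaning` is a hypothetical semantic-equivalence check (e.g. bidirectional entailment via an NLI model):

```python
import math

def sequence_log_probability(token_logprobs):
    """Length-normalized sequence log-probability: low values flag
    generations the model itself considered unlikely."""
    return sum(token_logprobs) / len(token_logprobs)

def semantic_entropy(samples, same_meaning):
    """Cluster sampled generations by meaning, then compute Shannon
    entropy over the cluster distribution; high entropy suggests the
    model's sampled answers disagree semantically."""
    clusters = []
    for s in samples:
        for cluster in clusters:
            if same_meaning(s, cluster[0]):  # matches an existing meaning cluster
                cluster.append(s)
                break
        else:
            clusters.append([s])             # no match: start a new cluster
    n = len(samples)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)
```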