Relations (1)
related (score 2.81), strongly supporting, 6 facts
Large Language Models (LLMs) are frequently observed to exhibit overconfidence bias, a tendency to generate incorrect or nonsensical information with unwarranted certainty [1], [2], [3]. This issue is linked to poor model calibration and to decoding strategies [4], prompting research into uncertainty estimation techniques that can mitigate these biases [5], [6].
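The calibration link in [4] can be made concrete: an overconfident model reports confidences that systematically exceed its actual accuracy, a gap commonly summarized by Expected Calibration Error (ECE). The Python sketch below is illustrative only and is not drawn from the cited works; the binning scheme and the synthetic confidence/accuracy data are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-weighted average gap between a model's stated
    confidence and its observed accuracy. 0.0 means perfectly calibrated."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()  # what the model claimed
        avg_acc = correct[in_bin].mean()       # how often it was right
        ece += in_bin.mean() * abs(avg_conf - avg_acc)
    return ece

# Synthetic overconfident model: claims 80-100% certainty,
# but is right only ~60% of the time (assumed numbers).
rng = np.random.default_rng(0)
conf = rng.uniform(0.8, 1.0, size=1_000)
right = rng.random(1_000) < 0.6
print(f"ECE = {expected_calibration_error(conf, right):.3f}")
```

On this synthetic data the ECE lands near 0.3, the average amount by which stated confidence overshoots accuracy; a well-calibrated model would score close to zero.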
Facts (6)
Sources
Medical Hallucination in Foundation Models and Their ... (medrxiv.org), 3 facts:
- Claim: Yuan et al. (2023) highlight the need for improved uncertainty estimation techniques to mitigate overconfidence in large language models.
- Claim: Large language models frequently exhibit overconfidence, generating outputs with high certainty even when the information is incorrect, which can mislead clinicians, as noted by Cao et al. (2021).
- Claim: Encouraging large language models to output uncertainty estimates or alternative explanations can counter overconfidence and premature-closure biases, particularly when users are guided to critically evaluate multiple options.
Hallucinations in LLMs: Can You Even Measure the Problem? (linkedin.com), 1 fact:
- Claim: Large Language Models (LLMs) often exhibit 'overconfidence bias,' the tendency to confidently deliver incorrect or nonsensical information.
Unknown source, 1 fact:
- Claim: Inference-related hallucinations in large language models result from decoding-strategy randomness, overconfidence, and softmax bottleneck limitations (see the decoding sketch after this list).
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org), 1 fact:
- Claim: Overconfidence in Large Language Models (LLMs) is characterized by outputs that present an unwarranted level of certainty, a phenomenon linked to poor model calibration.
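The decoding-side claim above can be illustrated with a minimal sketch: the probability a model assigns to its top token depends on the sampling temperature chosen at inference time, which is one reason raw softmax scores are an unreliable confidence signal. The logits below are invented for demonstration and do not come from any of the cited sources.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over next-token logits.
    T < 1 sharpens the distribution (more apparent confidence);
    T > 1 flattens it (more sampling randomness)."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Toy logits for a 5-token vocabulary (assumed values, not from any model).
logits = [2.0, 1.5, 0.3, -0.5, -1.0]

for t in (0.5, 1.0, 2.0):
    p = softmax(logits, temperature=t)
    print(f"T={t}: top-token probability = {p.max():.2f}")
```

The same logits yield a top-token probability of roughly 0.7 at T=0.5 but under 0.4 at T=2.0, which is why sampling-based uncertainty estimators (e.g., agreement across multiple generations) are often preferred over single-pass softmax confidence.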