claim
Medical Large Language Models (LLMs) exhibit overconfidence, a symptom of poor calibration that leads to outputs expressing unwarranted certainty (Cao et al., 2021; Hagendorff et al., 2023).
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org
Referenced by nodes (1)
- overconfidence bias concept