Claim
Medical large language models (LLMs) exhibit overconfidence: their probability estimates are poorly calibrated, so outputs present a level of certainty that their actual accuracy does not warrant (Cao et al., 2021; Hagendorff et al., 2023).
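The "poor calibration" in this claim is typically quantified with expected calibration error (ECE): predictions are binned by stated confidence, and the gap between average confidence and actual accuracy is averaged across bins. The sketch below is illustrative, not from the cited papers; the function name, bin count, and example data are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin-weighted average gap between stated confidence and accuracy.

    confidences: model-reported probabilities in (0, 1].
    correct: 1 if the corresponding prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue  # skip empty bins
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example of overconfidence: ~92% average stated confidence,
# but only 80% of the answers are actually correct.
conf = [0.95, 0.90, 0.92, 0.88, 0.97]
hits = [1, 1, 0, 1, 1]
print(round(expected_calibration_error(conf, hits), 3))
```

A well-calibrated model would score near zero here; an overconfident one, as the claim describes, shows a positive gap because stated confidence systematically exceeds accuracy.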
