claim
Large Language Models frequently exhibit overconfidence, generating incorrect information with high certainty, and poor calibration, where confidence scores do not align with prediction accuracy; both failures can mislead clinicians into trusting inaccurate outputs.
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org
Referenced by nodes (1)
- Large Language Models concept