Claim
Medical Large Language Models (LLMs) exhibit availability bias, manifesting as a tendency to propose diagnoses or treatments that are disproportionately represented in the model's training data.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org, via serper)
Referenced by nodes (1)
- training data concept