claim
Medical Large Language Models struggle to generalize beyond their training data: when faced with rare diseases, novel treatments, or atypical clinical presentations, and particularly when trained on imbalanced datasets, they often produce erroneous or irrelevant outputs.
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept