Claim
Models trained on imbalanced datasets often extrapolate from unrelated patterns, producing erroneous or irrelevant outputs (Svenstrup et al., 2015; Hegselmann et al., 2024b).
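A minimal NumPy sketch of the mechanism behind this claim, using toy synthetic data (the 95/5 class split, distributions, and threshold search are illustrative assumptions, not drawn from the cited papers): when one class dominates training data, a model tuned for overall accuracy largely ignores the minority class, so its outputs for minority inputs are systematically wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced dataset: 95% class 0, 5% class 1 (toy setup).
n0, n1 = 950, 50
X = np.concatenate([rng.normal(0.0, 1.0, n0), rng.normal(1.0, 1.0, n1)])
y = np.concatenate([np.zeros(n0), np.ones(n1)])

# A one-parameter "model": pick the decision threshold that maximizes
# overall training accuracy.
thresholds = np.linspace(X.min(), X.max(), 200)
acc = [((X > t).astype(float) == y).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(acc))]

pred = (X > t_best).astype(float)
minority_recall = pred[y == 1].mean()
print(f"overall accuracy: {max(acc):.2f}, minority recall: {minority_recall:.2f}")
```

The accuracy-maximizing threshold sits near the majority class's decision: overall accuracy stays at or above the 95% base rate while most minority examples are misclassified, a simple analogue of a model producing confident but erroneous outputs for underrepresented cases.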
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)
Referenced by nodes (1)
- Large Language Models (concept)