Claim
Glicksberg (2024) argues that large language models trained on static or historical data may recommend outdated or ineffective treatments, thereby reducing their clinical utility.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)
Referenced by nodes (1)
- Large Language Models concept