Claim
Large language model hallucinations in clinical settings can undermine the reliability of AI-generated medical information, potentially leading to adverse patient outcomes.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)