Claim
Hallucinations by large language models in clinical settings undermine the reliability of AI-generated medical information and can lead to adverse patient outcomes.

Authors

Sources

Referenced by nodes (1)