claim
Survey respondents identified limitations in training data and model architectures as key contributors to medical hallucinations in AI/LLM tools.
