Claim
Medical hallucinations in Large Language Models (LLMs) pose serious risks because incorrect information about dosages, drug interactions, or diagnostic criteria can lead to life-threatening outcomes.

Authors

Sources
