claim
Large language models developed explicitly for medical purposes remain vulnerable to domain-specific hallucinations, which often arise from reasoning failures rather than mere knowledge gaps.
