claim
Clinically oriented Large Language Models (LLMs) produce hallucinations, a problem exacerbated by the complexity and specificity of medical knowledge, where subtle differences in terminology or reasoning can lead to significant misunderstandings.

