claim
Large Vision-Language Models (LVLMs) inherit the susceptibility of Large Language Models (LLMs) to hallucination, which poses significant risks in high-stakes medical contexts.
