Relations (1)
related 0.30 — 3 supporting facts
Large Vision-Language Models are conceptually linked to Large Language Models: both serve as foundation models in healthcare [1], both share a vulnerability to hallucinations [2], and LLMs are often used as components in hallucination detection methods for LVLMs [3].
Facts (3)
Sources
Detecting and Evaluating Medical Hallucinations in Large Vision ... arxiv.org 2 facts
Claim: Large Vision Language Models (LVLMs) inherit susceptibility to hallucinations from Large Language Models (LLMs), which poses significant risks in high-stakes medical contexts.
Reference: Hallucination detection methods for Large Vision Language Models are categorized into two groups: approaches based on off-the-shelf tools (using closed-source LLMs or visual tools) and training-based models (which detect hallucinations incrementally from feedback); see the sketch after this list.
Medical Hallucination in Foundation Models and Their ... medrxiv.org 1 fact
Claim: Foundation models, including Large Language Models (LLMs) and Large Vision Language Models (VLMs), are used in healthcare for clinical decision support, medical research, and improving healthcare quality and safety.
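As a minimal illustration of the first category above (an off-the-shelf LLM used as a judge over LVLM output), the sketch below shows the general pattern: each claim from an LVLM-generated report is checked against reference findings by an external judge. All names, the prompt wording, and the toy judge are illustrative assumptions, not the cited paper's implementation; in practice the judge would be a call to a closed-source LLM API.

```python
from typing import Callable, List

# Hypothetical prompt template for an external LLM judge (assumption, not
# taken from the cited paper).
JUDGE_PROMPT = (
    "Reference findings:\n{reference}\n\n"
    "Candidate claim:\n{claim}\n\n"
    "Answer 'supported' or 'hallucinated'."
)

def detect_hallucinations(
    claims: List[str],
    reference: str,
    llm_judge: Callable[[str], str],
) -> List[bool]:
    """Return True for each claim the judge labels as hallucinated."""
    flags = []
    for claim in claims:
        prompt = JUDGE_PROMPT.format(reference=reference, claim=claim)
        verdict = llm_judge(prompt).strip().lower()
        flags.append("hallucinated" in verdict)
    return flags

if __name__ == "__main__":
    # Stand-in judge so the sketch runs offline; a real pipeline would send
    # the prompt to an off-the-shelf closed-source LLM.
    def toy_judge(prompt: str) -> str:
        return "hallucinated" if "large pneumothorax" in prompt else "supported"

    report_claims = [
        "Cardiac silhouette is normal.",
        "There is a large pneumothorax.",
    ]
    ground_truth = "Normal heart size. No pneumothorax or pleural effusion."
    print(detect_hallucinations(report_claims, ground_truth, toy_judge))
    # -> [False, True]
```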