claim
Prompt-induced hallucinations in medical LVLMs are triggered by prompts that contain confusing information, typically because the prompt lacks plausibility or factuality; such prompts are used to test a model's robustness in specific contexts.

Authors

Sources

Referenced by nodes (1)