claim
Medical Large Language Models (LLMs) exhibit confirmation bias when their responses align too closely with a user's implied hypothesis, leading the model to neglect contradictory evidence.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org via serper
Referenced by nodes (1)
- confirmation bias concept