claim
Medical Large Language Models (LLMs) exhibit confirmation bias when a model's response aligns too closely with a user's implied hypothesis, causing the model to neglect contradictory evidence.

Authors

Sources

Referenced by nodes (1)