claim
Confirmation bias in Large Language Models (LLMs) occurs when a model's response aligns too closely with a user's implied hypothesis, causing it to neglect contradictory evidence.
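The claim above can be made concrete with a toy metric: compare how often a model agrees with a statement when the prompt implies the user already believes it versus when the prompt is neutrally framed. This is only an illustrative sketch, not a method from the cited source; the function names and the response labels are hypothetical, and the data stands in for real model outputs.

```python
# Illustrative sketch (hypothetical names and data; no real LLM calls):
# a simple way to quantify the confirmation bias described in the claim.

def agreement_rate(responses):
    """Fraction of responses labeled as agreeing with the probed statement."""
    return sum(1 for r in responses if r == "agree") / len(responses)

def confirmation_bias_score(neutral_responses, leading_responses):
    """Difference in agreement rate between hypothesis-leading and neutral
    framings of the same statements. A positive score suggests the model
    shifts toward the user's implied hypothesis."""
    return agreement_rate(leading_responses) - agreement_rate(neutral_responses)

# Toy example: the same 4 statements probed under both framings.
neutral = ["agree", "disagree", "disagree", "disagree"]
leading = ["agree", "agree", "agree", "disagree"]
print(confirmation_bias_score(neutral, leading))  # 0.75 - 0.25 = 0.5
```

In a real evaluation, the response labels would come from judging actual model outputs on matched prompt pairs; the metric itself stays the same.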
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (2)
- Large Language Models concept
- confirmation bias concept