Claim
Erroneous outputs produced by Large Language Models (LLMs) often exhibit patterns that resemble human cognitive biases in clinical reasoning, even though LLMs do not possess human psychology.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org via serper
Referenced by nodes (2)
- Large Language Models concept
- cognitive bias concept