claim
Moradi et al. (2021) observe that a lack of structured input in medical data may confuse Large Language Models, leading them to reproduce false patterns or generate irrelevant outputs.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org, via serper)
Referenced by nodes (1)
- Large Language Models concept