Claim
Huang et al. (2023) define Large Language Model (LLM) hallucinations as outputs that are factually incorrect, logically inconsistent, or inadequately grounded in reliable sources.
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (1)
- hallucination concept