claim
Internal contradictions within a Large Language Model's response indicate that the model is generating information without maintaining a coherent understanding of the medical case; Sambara et al. (2024) identify such contradictions as a sign of hallucination rather than reasoned analysis.
Authors
Sources
- Medical Hallucination in Foundation Models and Their ... www.medrxiv.org
Referenced by nodes (1)
- hallucination concept