claim
Hallucination detection methods for Large Language Models are categorized into three groups: factual verification, summary consistency verification, and uncertainty-based hallucination detection.
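Of the three groups, uncertainty-based detection lends itself to a compact illustration: a common heuristic flags a generation when the model's average token-level negative log-probability is high. A minimal sketch, assuming per-token log-probabilities are available from the model (the threshold value and function names here are illustrative, not taken from the source):

```python
def mean_negative_log_prob(token_logprobs):
    """Average negative log-probability over the generated tokens.

    Higher values mean the model was less confident in its output,
    which uncertainty-based detectors treat as a hallucination signal.
    """
    return -sum(token_logprobs) / len(token_logprobs)

def flag_hallucination(token_logprobs, threshold=1.0):
    # Flag the generation when average uncertainty exceeds the
    # (illustrative, task-dependent) threshold.
    return mean_negative_log_prob(token_logprobs) > threshold

# Confident generation: per-token log-probs close to 0.
confident = [-0.05, -0.10, -0.02, -0.08]
# Uncertain generation: per-token log-probs far below 0.
uncertain = [-2.3, -1.9, -2.8, -2.1]

print(flag_hallucination(confident))  # False
print(flag_hallucination(uncertain))  # True
```

In practice the threshold is tuned on labeled data, and variants use token entropy or sampling-based disagreement rather than a single log-probability average.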
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept