claim
Hallucination in Large Language Models (LLMs) is defined as content generated by the model that is not present in the retrieved ground truth, a definition used in Ji et al. (2023), Li et al. (2024), and Perković et al. (2024).
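The definition above can be made concrete with a toy check: compare each generated sentence against the retrieved evidence and flag sentences with little lexical support. This is an illustrative sketch only, not the method of any cited paper; the `flag_hallucinations` helper, the stop-word list, and the `threshold` cutoff are all hypothetical choices. Real detectors typically use entailment models or knowledge-graph verification rather than word overlap.

```python
# Toy sketch (not from the cited papers): flag generated sentences as
# potential hallucinations when they share too few content words with
# the retrieved ground-truth passages. Purely illustrative heuristic.

def _content_words(text: str) -> set[str]:
    """Lowercase words with punctuation stripped, minus a tiny stop list."""
    stop = {"the", "a", "an", "is", "are", "was", "were", "of", "in", "to", "and"}
    return {w.strip(".,;:!?").lower() for w in text.split()} - stop - {""}

def flag_hallucinations(generated_sentences, retrieved_passages, threshold=0.3):
    """Return generated sentences whose content-word overlap with the
    retrieved evidence falls below `threshold` (a hypothetical cutoff)."""
    evidence: set[str] = set()
    for passage in retrieved_passages:
        evidence |= _content_words(passage)
    flagged = []
    for sent in generated_sentences:
        words = _content_words(sent)
        # Fraction of the sentence's content words found in the evidence.
        overlap = len(words & evidence) / len(words) if words else 1.0
        if overlap < threshold:
            flagged.append(sent)
    return flagged
```

For example, given retrieved text about Paris and a generated answer containing an unsupported claim, only the unsupported sentence is flagged.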
Authors
Sources
- KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... arxiv.org
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept