claim
Hallucination in Large Language Models (LLMs) is defined as content generated by the model that is not present in the retrieved ground truth (Ji et al., 2023; Li et al., 2024; Perković et al., 2024).
