claim
Huang et al. (2023) define Large Language Model (LLM) hallucinations as outputs that are factually incorrect, logically inconsistent, or not adequately grounded in reliable sources.
