Relations (1)
related (score: 1.00) — strongly supporting, 1 fact
Hallucination and interpretability are related in that both are key dimensions used to evaluate the reliability and trustworthiness of large language models, as identified in the study by Zhang et al. (2023) [1].
Facts (1)
Sources
Building Trustworthy NeuroSymbolic AI Systems (arXiv, arxiv.org): 1 fact
Claim: Zhang et al. (2023) identified reliability in LLMs by examining tendencies regarding hallucination, truthfulness, factuality, honesty, calibration, robustness, and interpretability.