Relations (1)
related (strength: 1.58), strongly supported by 2 facts
Large Language Models are related to interpretability in two ways: research aims to improve their logical consistency and reasoning by integrating structured Knowledge Graphs into their reasoning mechanisms [1], and interpretability is one of the dimensions, alongside truthfulness, factuality, and robustness, used to evaluate the reliability of these models [2].
Facts (2)
Sources
[1] LLM-empowered knowledge graph construction: A survey - arXiv (arxiv.org), 1 fact
Claim: Future research in Large Language Models (LLMs) and Knowledge Graphs (KGs) is expected to focus on integrating structured KGs into LLM reasoning mechanisms to enhance logical consistency, causal inference, and interpretability.
[2] Building Trustworthy NeuroSymbolic AI Systems - arXiv (arxiv.org), 1 fact
Claim: Zhang et al. (2023) characterized reliability in LLMs in terms of hallucination, truthfulness, factuality, honesty, calibration, robustness, and interpretability.