Relations (1)

related 2.32 — strongly supporting 4 facts

Large Language Models are directly linked to factuality through research evaluating their reliability and tendency to hallucinate, as evidenced by [1] and [2]. Furthermore, academic surveys and evaluation frameworks specifically dedicated to assessing the factuality of these models are documented in [3] and [4].

Facts (4)

Sources
Awesome-Hallucination-Detection-and-Mitigation - GitHub (github.com), 1 fact
reference: The paper "Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity" by Wang et al. (2023) surveys the state of factuality in large language models, covering knowledge, retrieval, and domain-specificity.
Building Trustworthy NeuroSymbolic AI Systems - arXiv (arxiv.org), 1 fact
claim: Zhang et al. (2023) examined reliability in LLMs in terms of hallucination, truthfulness, factuality, honesty, calibration, robustness, and interpretability.
Unknown source, 1 fact
claim: The response verification framework described in the paper 'A Knowledge Graph-Based Hallucination Benchmark for Evaluating...' assesses the factuality of long-form text by identifying hallucinations in the output of Large Language Models.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... - arXiv (arxiv.org), 1 fact
reference: The paper 'Evaluating the factuality of large language models using large-scale knowledge graphs' is a cited reference regarding the evaluation of large language model factuality.