Relations (1)

related — score 11.00, strongly supporting (11 facts)

Knowledge graphs are frequently studied as a mechanism to mitigate hallucinations in large language models by providing structured, factual data {fact:1, fact:3, fact:5, fact:7, fact:9}. Conversely, the construction of knowledge graphs using these models can itself be prone to hallucinations [1], and the integration of the two remains a key research area for improving model reliability {fact:4, fact:6, fact:10, fact:11}.
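The grounding mechanism these sources describe can be sketched minimally: rather than answering from parametric memory, a system looks the fact up in a structured triple store and abstains when no triple matches. The graph contents and function names below are illustrative assumptions, not taken from any cited system.

```python
# Minimal sketch of knowledge-graph grounding: answers come from explicit
# (subject, predicate, object) triples, and the system abstains (returns
# None) instead of guessing when the graph has no matching fact.
# All data and names here are illustrative.

from typing import Optional

# A tiny knowledge graph stored as a set of triples.
KG = {
    ("Marie Curie", "field", "physics and chemistry"),
    ("Marie Curie", "nobel_prizes", "2"),
    ("Ada Lovelace", "field", "mathematics"),
}

def grounded_answer(subject: str, predicate: str) -> Optional[str]:
    """Return the object of a matching triple, or None (no guessing)."""
    for s, p, o in KG:
        if s == subject and p == predicate:
            return o
    return None  # abstaining here is what prevents a hallucinated answer

print(grounded_answer("Marie Curie", "nobel_prizes"))   # prints 2
print(grounded_answer("Ada Lovelace", "nobel_prizes"))  # prints None
```

The key design choice, reflected in several of the facts below, is that the structured store makes "no answer" an explicit, checkable outcome rather than an opportunity for the model to fabricate one.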

Facts (11)

Sources
Combining Knowledge Graphs and Large Language Models - arXiv arxiv.org arXiv 2 facts
Claim: Incorporating knowledge graphs into large language models can mitigate issues like hallucinations and lack of domain-specific knowledge because knowledge graphs organize information in structured formats that capture relationships between entities.
Claim: Using large language models to automate the construction of knowledge graphs carries the risk of hallucination or the production of incorrect results, which compromises the accuracy and validity of the knowledge graph data.
LLM-Powered Knowledge Graphs for Enterprise Intelligence and ... arxiv.org arXiv 1 fact
Claim: Integrating large language models and knowledge graphs in enterprise contexts faces four key challenges: hallucination of inaccurate facts or relationships, data privacy and security concerns, computational overhead of running extraction at scale, and ontology mismatch when merging different knowledge sources.
KG-RAG: Bridging the Gap Between Knowledge and Creativity - arXiv arxiv.org arXiv 1 fact
Claim: Transitioning from unstructured dense text representations to dynamic, structured knowledge representation via knowledge graphs can significantly reduce the occurrence of hallucinations in Language Model Agents by ensuring they rely on explicit information rather than implicit knowledge stored in model weights.
[PDF] INTEGRATING KNOWLEDGE GRAPHS FOR HALLUCINATION ... papers.ssrn.com SSRN 1 fact
Claim: The study titled 'INTEGRATING KNOWLEDGE GRAPHS FOR HALLUCINATION ...' investigates how integrating knowledge graphs into large language model inference pipelines mitigates hallucination.
Medical Hallucination in Foundation Models and Their ... medrxiv.org medRxiv 1 fact
Claim: The integration of knowledge graphs into Large Language Models helps mitigate hallucinations, which are instances where models generate plausible but incorrect information, according to Lavrinovics et al. (2024).
Large Language Models Meet Knowledge Graphs for Question ... arxiv.org arXiv 1 fact
Claim: Leveraging Knowledge Graphs to augment Large Language Models can help overcome challenges such as hallucinations, limited reasoning capabilities, and knowledge conflicts in complex Question Answering scenarios.
Beyond the Black Box: How Knowledge Graphs Make LLMs Smarter ... medium.com Vi Ha · Medium 1 fact
Claim: The combination of Large Language Models (LLMs) and Knowledge Graphs (KGs) can be utilized to reduce hallucinations in artificial intelligence applications.
Empowering RAG Using Knowledge Graphs: KG+RAG = G-RAG neurons-lab.com Neurons Lab 1 fact
Claim: Knowledge Graphs help mitigate the hallucination problem in LLMs by enabling the extraction and presentation of precise factual information, such as specific contact details, which are difficult to retrieve through standard LLMs.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com GitHub 1 fact
Reference: The paper titled 'Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective' was published in the Journal of Web Semantics in 2025.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer 1 fact
Reference: Agrawal G, Kumarage T, Alghamdi Z, and Liu H authored the survey 'Can knowledge graphs reduce hallucinations in LLMs?: A survey', published as an arXiv preprint in 2023 (arXiv:2311.07914).