Relations (1)

related (score 5.81) — strongly supporting, 55 facts

Retrieval-Augmented Generation (RAG) is a framework designed to ground Large Language Models (LLMs) in external, verified data, improving accuracy and reducing hallucinations [1], [2], [3]. Numerous papers and frameworks, such as GraphRAG and CoT-RAG, demonstrate how RAG techniques are integrated with LLMs to enhance their reasoning, fact-checking, and domain-specific performance [4], [5], [6], [7].
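The grounding loop these sources describe can be sketched in a few lines. Everything below is illustrative: the toy corpus, the keyword-overlap retriever, and the prompt template are stand-ins for a real embedding-based retriever and LLM call, not any particular framework's API.

```python
# Minimal sketch of a RAG pipeline: retrieve evidence, then build a
# grounded prompt. Corpus and scoring are illustrative stand-ins.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model by placing retrieved evidence ahead of the question."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only the sources below.\n{context}\nQuestion: {query}"

corpus = [
    "GraphRAG augments retrieval with a knowledge graph.",
    "RAG grounds LLM outputs in retrieved external evidence.",
    "Instruction tuning aligns model outputs with user intent.",
]
query = "How does RAG ground LLM outputs?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

In a real system the prompt would then be sent to an LLM; the grounding comes from the model answering over the retrieved passages rather than its parametric memory alone.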

Facts (55)

Sources
Large Language Models Meet Knowledge Graphs for Question ... — arxiv.org (arXiv), 9 facts
[reference] BlendQA (Xin et al., 2025) is a question-answering dataset for Large Language Models and Knowledge Graphs that evaluates the cross-knowledge-source reasoning capabilities of Retrieval-Augmented Generation.
[reference] Li et al. (2025a) proposed CoT-RAG, a framework that integrates chain-of-thought reasoning and retrieval-augmented generation to enhance reasoning capabilities in large language models (arXiv:2504.13534).
[reference] Fairness concerns remain in Retrieval-Augmented Generation (RAG) systems because Large Language Models can capture social biases from training data, and Knowledge Graphs may contain incomplete or biased knowledge, as noted by Wu et al. (2024b).
[reference] PG-RAG (Liang et al., 2024b) proposes dynamic, adaptable knowledge-retrieval indexes based on Large Language Models to handle complex queries and improve the performance of Retrieval-Augmented Generation (RAG) systems in question-answering tasks.
[reference] LiHua-World (Fan et al., 2025) is a question-answering dataset for Large Language Models and Knowledge Graphs that evaluates the multi-hop question-answering capability of Large Language Models in Retrieval-Augmented Generation scenarios.
[reference] STaRK (Wu et al., 2024a) is a question-answering dataset for Large Language Models and Knowledge Graphs that evaluates the question-answering performance of Large Language Model-driven Retrieval-Augmented Generation.
[claim] Knowledge graphs typically function as background knowledge when synthesizing large language models for complex question answering, with knowledge fusion and retrieval-augmented generation (RAG) serving as the primary technical paradigms.
[reference] Tian et al. (2025) conducted a systematic exploration of knowledge graph alignment with large language models in retrieval-augmented generation.
[reference] mmRAG (Xu et al., 2025a) is a question-answering dataset for Large Language Models and Knowledge Graphs that evaluates multi-modal Retrieval-Augmented Generation across text, tables, and Knowledge Graphs.
Survey and analysis of hallucinations in large language models — frontiersin.org (Frontiers), 3 facts
[claim] Retrieval-augmented generation (RAG) integrates external knowledge for grounding in large language models and is highly feasible to adopt via freely available toolkits.
[perspective] Future research in AI hallucination mitigation should explore grounding techniques such as retrieval-augmented generation (RAG) and hybrid models that combine symbolic reasoning with large language models.
[claim] Researchers have attempted to reduce hallucinations in Large Language Models using prompting techniques including chain-of-thought prompting, self-consistency decoding, retrieval-augmented generation, and verification-based refinement.
Medical Hallucination in Foundation Models and Their ... — medrxiv.org (medRxiv), 3 facts
[procedure] Researchers adapt LLMs for medicine using domain-specific corpora, instruction tuning, and retrieval-augmented generation (RAG) to align outputs with clinical practice, as described by Wei et al. (2022) and Lewis et al. (2020).
[reference] A survey by Nazi and Peng (2024) provides a comprehensive review of LLMs in healthcare, highlighting that domain-specific adaptations like instruction tuning and retrieval-augmented generation can enhance patient outcomes and streamline medical knowledge dissemination, while noting persistent challenges regarding reliability, interpretability, and hallucination risk.
[claim] Robust fine-tuning procedures and retrieval-augmented generation can improve the balance of training data, which helps mitigate availability bias in large language models.
Integrating Knowledge Graphs into RAG-Based LLMs to Improve ... — thesis.unipd.it (Università degli Studi di Padova), 3 facts
[claim] The thesis research explores combining Large Language Models with knowledge graphs using the Retrieval-Augmented Generation (RAG) method to improve the reliability and accuracy of fact-checking.
[claim] The thesis 'Integrating Knowledge Graphs into RAG-Based LLMs to Improve...' explores combining Large Language Models with knowledge graphs using the Retrieval-Augmented Generation (RAG) method to improve fact-checking reliability.
[claim] The research thesis by Roberto Vicentini explores integrating knowledge graphs with Large Language Models using the Retrieval-Augmented Generation (RAG) method to improve the reliability and accuracy of fact-checking.
Grounding LLM Reasoning with Knowledge Graphs — arxiv.org (arXiv), 3 facts
[claim] Retrieval-Augmented Generation (RAG) and SQL-based querying are methods used to address the gap in LLM reliability, but they often fail to capture the dynamic relationships between concepts necessary for comprehensive understanding.
[claim] Recent research combines Retrieval-Augmented Generation (RAG) with structured knowledge, such as ontologies and knowledge graphs, to improve the factuality and reasoning capabilities of Large Language Models.
[claim] Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to ground their outputs in dynamically retrieved external evidence.
Unknown source — 3 facts
[reference] The research paper titled 'CoT-RAG: Integrating Chain of Thought and Retrieval-Augmented Generation to Enhance Reasoning in Large Language Models' proposes a method that combines Chain of Thought prompting with Retrieval-Augmented Generation to improve the reasoning capabilities of large language models.
[claim] Retrieval Augmented Generation (RAG) integrates Large Language Models' capabilities with retrieval-based approaches to enhance correctness.
[claim] Retrieval-Augmented Generation (RAG), knowledge graphs, Large Language Models (LLMs), and Artificial Intelligence (AI) are increasingly being applied in knowledge-heavy industries, such as healthcare.
A self-correcting Agentic Graph RAG for clinical decision support in ... — pmc.ncbi.nlm.nih.gov (PMC), 2 facts
[claim] Retrieval-Augmented Generation (RAG) is a method used to make Large Language Models less prone to hallucinating by grounding their output in retrieved data.
[claim] Retrieval-Augmented Generation (RAG) is utilized as a mitigation strategy to ground Large Language Models (LLMs) in external information.
How to Improve Multi-Hop Reasoning With Knowledge Graphs and ... — neo4j.com (Neo4j), 2 facts
[claim] Retrieval-augmented generation (RAG) allows LLMs to ground responses in external data instead of relying solely on pretraining, which helps mitigate the risk of LLMs producing misleading or incorrect information.
[claim] GraphRAG is a retrieval-augmented generation (RAG) technique that utilizes a knowledge graph to enhance the accuracy, context, and explainability of responses generated by large language models (LLMs).
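A graph-aware retriever of the kind the GraphRAG facts describe can be sketched with a toy triple store: expand outward from the query entity, then serialize the collected triples as prompt context so the LLM sees the relationships, not just isolated snippets. The triples, the hop-limited expansion, and the flat-text serialization are all illustrative assumptions, not Neo4j's or any published GraphRAG implementation's actual API.

```python
# Toy sketch of graph-based retrieval for RAG: gather triples within a
# fixed number of hops of a query entity, then flatten them into context.

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

def neighborhood(entity: str, hops: int = 2) -> list[tuple[str, str, str]]:
    """Collect triples reachable from `entity` within `hops` outgoing edges."""
    frontier, seen, found = {entity}, set(), []
    for _ in range(hops):
        next_frontier = set()
        for s, p, o in TRIPLES:
            if s in frontier and (s, p, o) not in seen:
                seen.add((s, p, o))
                found.append((s, p, o))
                next_frontier.add(o)
        frontier = next_frontier
    return found

# Serialize the subgraph for the prompt; the second hop surfaces the
# multi-hop fact (warfarin -> anticoagulant) that snippet retrieval misses.
context = "; ".join(f"{s} {p} {o}" for s, p, o in neighborhood("aspirin"))
print(context)
```

The point of the second hop is exactly the multi-hop reasoning benefit claimed above: a plain text retriever matching "aspirin" would likely never surface the anticoagulant fact.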
Detect hallucinations in your RAG LLM applications with Datadog ... — datadoghq.com (Barry Eom, Aritra Biswas · Datadog), 2 facts
[claim] Retrieval-augmented generation (RAG) does not prevent hallucinations, as large language models can still fabricate responses while citing sources.
[claim] Retrieval-augmented generation (RAG) techniques aim to reduce hallucinations by providing large language models with relevant context from verified sources and prompting the models to cite those sources.
Efficient Knowledge Graph Construction and Retrieval from ... — arxiv.org (arXiv), 2 facts
[claim] Standard Retrieval-Augmented Generation (RAG) pipelines often return isolated snippets without understanding the relationships between them, which limits the ability of Large Language Models to synthesize logically coherent answers in high-stakes enterprise environments.
[reference] Bernal Jimenez Gutierrez et al. published 'HippoRAG: Neurobiologically Inspired Long-Term Memory for Large Language Models' in Advances in Neural Information Processing Systems 37 (2024).
Reducing hallucinations in large language models with custom ... — aws.amazon.com (Amazon Web Services), 2 facts
[claim] Retrieval-Augmented Generation (RAG) systems use external knowledge sources to augment the output of large language models, which improves factual accuracy and reduces hallucinations.
[procedure] Amazon Bedrock Agents orchestrate multistep tasks by using the reasoning capabilities of Large Language Models to break user-requested tasks into steps, create an orchestration plan, and execute that plan by invoking company APIs or accessing knowledge bases via Retrieval-Augmented Generation (RAG).
vectara/hallucination-leaderboard — github.com (Vectara), 1 fact
[claim] The Vectara hallucination leaderboard serves as an indicator of the accuracy of Large Language Models deployed in Retrieval-Augmented Generation (RAG) and agentic pipelines, where the model acts as a summarizer of search results.
Building Trustworthy NeuroSymbolic AI Systems — arxiv.org (arXiv), 1 fact
[reference] Retrieval-Augmented Generation (RAG) Language Models, including REALM (Guu et al. 2020), LAMA (Petroni et al. 2019), ISEEQ (Gaur et al. 2022), and RAG (Lewis et al. 2020), integrate a generator with a dense passage retriever and access to indexed data sources to add a layer of supervision to model outputs.
Knowledge intensive agents — sciencedirect.com (ScienceDirect), 1 fact
[claim] Recent research studies in the field of artificial intelligence increasingly adopt an LLM-centric perspective, focusing on leveraging the capabilities of Large Language Models (LLMs) to improve Retrieval-Augmented Generation (RAG) performance.
Injecting Knowledge Graph Embeddings into RAG Architectures — ceur-ws.org (CEUR-WS), 1 fact
[reference] The research paper titled 'Injecting Knowledge Graph Embeddings into RAG Architectures' addresses the problem of fact-checking by injecting Knowledge Graph Embedding (KGE) vector representations into Large Language Models (LLMs) using a Retrieval Augmented Generation (RAG) framework.
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... — arxiv.org (arXiv), 1 fact
[reference] Yuzhe Zhang, Yipeng Zhang, Yidong Gan, Lina Yao, and Chen Wang authored the paper 'Causal graph discovery with retrieval-augmented generation based large language models', published as arXiv preprint arXiv:2402.15301 in 2024.
Knowledge Graph Combined with Retrieval-Augmented Generation ... — drpress.org (Academic Journal of Science and Technology), 1 fact
[claim] Integrating Knowledge Graphs (KGs) with Retrieval-Augmented Generation (RAG) enhances the knowledge representation and reasoning abilities of Large Language Models (LLMs) by utilizing structured knowledge, which enables the generation of more accurate answers.
Medical Hallucination in Foundation Models and Their Impact on ... — medrxiv.org (medRxiv), 1 fact
[claim] Retrieval-augmented generation (RAG) techniques, which allow Large Language Models to access external knowledge dynamically, can help improve performance on unfamiliar clinical cases.
RAG Hallucinations: Retrieval Success ≠ Generation Accuracy — linkedin.com (Sumit Umbardand · LinkedIn), 1 fact
[claim] Large Language Models generate confident answers even when retrieved context is irrelevant, which introduces hallucinations into production RAG systems.
Knowledge Graph-extended Retrieval Augmented Generation for ... — arxiv.org (arXiv), 1 fact
[claim] Knowledge Graph-extended Retrieval Augmented Generation (KG-RAG) is a specific form of Retrieval Augmented Generation (RAG) that integrates Knowledge Graphs with Large Language Models.
A Systematic Exploration of Knowledge Graph Alignment with Large ... — ojs.aaai.org (AAAI), 1 fact
[claim] Retrieval Augmented Generation (RAG) integrated with Knowledge Graphs (KGs) is an effective method for enhancing the performance of Large Language Models (LLMs).
LLM-empowered knowledge graph construction: A survey — arxiv.org (arXiv), 1 fact
[claim] In Retrieval-Augmented Generation (RAG) frameworks, knowledge graphs serve as dynamic infrastructure providing factual grounding and structured memory for Large Language Models, rather than acting merely as static repositories for human interpretation.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... — github.com (GitHub), 1 fact
[reference] The paper 'LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs -- No Silver Bullet for LC or RAG Routing' (arXiv, 2025) benchmarks retrieval-augmented generation and long-context Large Language Models.
A Survey of Incorporating Psychological Theories in LLMs — arxiv.org (arXiv), 1 fact
[claim] Hippocampal indexing theory, as proposed by Teyler & DiScenna (1986), views the hippocampus as a pointer to neocortical memory and is used to enhance retrieval-augmented generation (Gutierrez et al., 2024) and counterfactual reasoning (Miao et al., 2024a) in LLMs.
The Role of Hallucinations in Large Language Models — cloudthat.com (CloudThat), 1 fact
[claim] Techniques such as Retrieval-Augmented Generation (RAG), fact-checking pipelines, and improved prompting can significantly reduce, though not completely prevent, hallucinations in large language models.
A survey on augmenting knowledge graphs (KGs) with large ... — link.springer.com (Springer), 1 fact
[claim] Integrating knowledge graphs with large language models via Retrieval-Augmented Generation (RAG) allows the retriever to fetch relevant entities and relations from the knowledge graph, which enhances the interpretability and factual consistency of the large language model's outputs.
LLM-Powered Knowledge Graphs for Enterprise Intelligence and ... — arxiv.org (arXiv), 1 fact
[claim] Combining large language models (LLMs) with retrieval-augmented generation (RAG) techniques enhances precision in contextual retrieval and entity-relationship extraction.
Hallucination Causes: Why Language Models Fabricate Facts — mbrenndoerfer.com (M. Brenndoerfer), 1 fact
[claim] Retrieval-augmented generation reduces hallucination for tail entities by providing factual grounding in the model's context window, allowing the model to use its in-context reasoning ability even when its parametric knowledge of the entity is weak.
A framework to assess clinical safety and hallucination rates of LLMs ... — nature.com (Nature), 1 fact
[claim] Retrieval-Augmented Generation (RAG) enables large language models to generate more precise and pertinent results by equipping them with domain-specific knowledge.
Unlock the Power of Knowledge Graphs and LLMs — topquadrant.com (Steve Hedden · TopQuadrant), 1 fact
[claim] Knowledge graphs improve the accuracy and contextual understanding of large language models and generative AI through retrieval-augmented generation (RAG), prompt-to-query techniques, or fine-tuning.
Evaluating RAG applications with Amazon Bedrock knowledge base ... — aws.amazon.com (Amazon Web Services), 1 fact
[claim] Organizations building and deploying AI applications that use large language models with Retrieval-Augmented Generation (RAG) face challenges in evaluating AI outputs effectively throughout the application lifecycle.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph — stardog.com (Stardog), 1 fact
[claim] Ali Ghodsi, the CEO of Databricks, suggests that Retrieval-Augmented Generation (RAG) is inadequate for enterprise use because most LLMs struggle to leverage the context pulled from vector databases.