Concept: fact-checking

Facts (16)

Sources
Integrating Knowledge Graphs into RAG-Based LLMs to Improve ... thesis.unipd.it Università degli Studi di Padova 4 facts
Claim: Custom prompt engineering strategies are necessary for fact-checking systems because different LLMs benefit from different types of contextual information provided by knowledge graphs.
Claim: Effective fact-checking performance requires custom prompt engineering strategies because different Large Language Models benefit from different types of contextual information.
Perspective: Gemini-1.5-Flash prioritizes balanced decision-making in fact-checking tasks, whereas GPT-4o-Mini is more effective at maximizing correct predictions, even if it favors the majority class.
Claim: The thesis by Roberto Vicentini explores integrating knowledge graphs with Large Language Models via Retrieval-Augmented Generation (RAG) to improve the reliability and accuracy of fact-checking.
Large Language Models Meet Knowledge Graphs for Question ... arxiv.org arXiv Sep 22, 2025 2 facts
Claim: Question answering (QA) is a fundamental component of artificial intelligence, natural language processing, information retrieval, and data management, with applications including text generation, chatbots, dialog generation, web search, entity linking, natural language query, and fact-checking.
Claim: Lightweight answer validation in LLM+KG systems can be achieved with probabilistic logic programs and Bloom-filter sketches for KG-based fact-checking, as an alternative to relying solely on LLMs for guardrails.
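The Bloom-filter idea above can be illustrated as a compact membership test over canonicalized KG triples. This is a minimal sketch, not the paper's implementation; the triples, filter size, and hash count are invented for illustration:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over a fixed-size bit array."""
    def __init__(self, size=1 << 16, hashes=4):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # Derive k independent positions by salting a cryptographic hash.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Index KG triples as canonical "subject|predicate|object" strings.
kg_triples = [("Padua", "locatedIn", "Italy"), ("GPT-4o-Mini", "isA", "LLM")]
sketch = BloomFilter()
for s, p, o in kg_triples:
    sketch.add(f"{s}|{p}|{o}")

def check_claim(s, p, o):
    # False means "definitely not in the KG"; True may rarely be a false positive.
    return f"{s}|{p}|{o}" in sketch
```

Because a Bloom filter never yields false negatives, a "not in KG" answer is a reliable signal that a generated triple is unsupported, while "in KG" is probabilistic and may need a full lookup.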
GraphCheck: Breaking Long-Term Text Barriers with Extracted ... pmc.ncbi.nlm.nih.gov PMC 1 fact
Claim: GraphCheck is a graph-enhanced framework that addresses long-text fact-checking by utilizing extracted knowledge graphs.
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented ... arxiv.org arXiv Mar 18, 2025 1 fact
Claim: Knowledge Graph-based Question Answering (KBQA) has been applied across various domains, including text understanding and fact-checking.
[PDF] Injecting Knowledge Graph Embeddings into RAG Architectures ceur-ws.org CEUR-WS 1 fact
Reference: The paper 'Injecting Knowledge Graph Embeddings into RAG Architectures' addresses fact-checking by injecting Knowledge Graph Embedding (KGE) vector representations into Large Language Models (LLMs) within a Retrieval-Augmented Generation (RAG) framework.
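As a rough sketch of what the retrieval step of such a pipeline can look like: entities are ranked by embedding similarity to a query vector and the top hits are injected into the prompt. All entity names and vector values below are made up for illustration; a real system would use trained KGE vectors (e.g. from TransE) and an actual LLM call:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical pre-trained KGE vectors; values are illustrative only.
kge = {
    "Padua": [0.9, 0.1, 0.0],
    "Italy": [0.8, 0.2, 0.1],
    "GPT-4o-Mini": [0.0, 0.9, 0.4],
}

def retrieve_context(query_vec, k=2):
    """RAG retrieval step: rank KG entities by embedding similarity."""
    ranked = sorted(kge, key=lambda e: cosine(query_vec, kge[e]), reverse=True)
    return ranked[:k]

def build_prompt(claim, query_vec):
    """Inject the retrieved entities into a fact-checking prompt."""
    entities = ", ".join(retrieve_context(query_vec))
    return f"Using knowledge about {entities}, fact-check: {claim}"
```

The design choice here is that grounding happens before generation: the LLM sees only KG-derived context ranked by the embeddings, rather than being asked to recall facts from its parameters.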
Medical Hallucination in Foundation Models and Their ... medrxiv.org medRxiv Mar 3, 2025 1 fact
Procedure: The interactive self-reflection methodology (Ji et al., 2023) for medical question answering proceeds in two steps: (1) issue a knowledge-acquisition prompt to generate biomedical concepts relevant to a patient presentation, and (2) run iterative fact-checking queries to verify consistency between the generated concepts and current medical guidelines.
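The two-step loop above might be sketched as follows. This is a hedged illustration, not the authors' code: `llm` and `verify` are placeholder callables standing in for a model API and a guideline-consistency check, and the semicolon-separated concept format is an assumption:

```python
from typing import Callable, List

def self_reflective_qa(case: str,
                       llm: Callable[[str], str],
                       verify: Callable[[str], bool],
                       max_rounds: int = 3) -> List[str]:
    """Two-step self-reflection: (1) knowledge acquisition,
    (2) iterative fact-checking of each generated concept."""
    # Step 1: knowledge-acquisition prompt.
    raw = llm(f"List biomedical concepts relevant to: {case}")
    concepts = [c.strip() for c in raw.split(";") if c.strip()]

    # Step 2: iterative fact-checking; drop unsupported concepts and
    # re-prompt for verified replacements until none fail (or rounds run out).
    for _ in range(max_rounds):
        bad = [c for c in concepts if not verify(c)]
        if not bad:
            break
        concepts = [c for c in concepts if verify(c)]
        extra = llm(f"Replace these unsupported concepts for {case}: {'; '.join(bad)}")
        concepts += [c.strip() for c in extra.split(";")
                     if c.strip() and verify(c.strip())]
    return concepts
```

In practice `verify` would itself be a fact-checking query against guidelines or a KG; the loop structure is what the procedure describes, not the specific prompts.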
Enhancing LLMs with Knowledge Graphs: A Case Study - LinkedIn linkedin.com LinkedIn Nov 7, 2023 1 fact
Claim: The authors use a knowledge graph as a structured data source for LLM fact-checking to mitigate the risk of hallucination, defined as an LLM's tendency to generate erroneous or nonsensical text.
Awesome-Hallucination-Detection-and-Mitigation - GitHub github.com GitHub 1 fact
Reference: Zhang et al. (2025) published 'CORRECT: Context- and Reference-Augmented Reasoning and Prompting for Fact-Checking' in the proceedings of NAACL 2025.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com Springer Nov 4, 2024 1 fact
Claim: Automated fact-checking processes for LLM-KG systems may overlook context-specific nuances or inaccuracies, making it difficult to guarantee fully reliable outputs.
The Hallucinations Leaderboard, an Open Effort to Measure ... huggingface.co Hugging Face Jan 29, 2024 1 fact
Claim: The Hallucinations Leaderboard includes tasks across several categories: Closed-book Open-domain QA (NQ Open, TriviaQA, TruthfulQA), Summarisation (XSum, CNN/DM), Reading Comprehension (RACE, SQuADv2), Instruction Following (MemoTrap, IFEval), Fact-Checking (FEVER), Hallucination Detection (FaithDial, True-False, HaluEval), and Self-Consistency (SelfCheckGPT).
LLM Hallucinations: Causes, Consequences, Prevention - LLMs llmmodels.org llmmodels.org May 10, 2024 1 fact
Procedure: Human oversight as a mitigation strategy for large language model hallucinations combines explicit fact-checking processes with review by human evaluators.
LLM-KG4QA: Large Language Models and Knowledge Graphs for ... github.com GitHub 1 fact
Reference: The paper 'Can Knowledge Graphs Make Large Language Models More Trustworthy?' focuses on integrating knowledge graphs with LLMs for fact-checking and grounding.