Relations (1)
related 12.00 — strongly supporting, 12 facts
Knowledge graphs serve as structured data sources that ground LLM outputs and mitigate the risk of hallucinations [1], [2], [3], [4]. Furthermore, specialized frameworks such as KGHaluBench, GraphEval, and KG-fpq leverage knowledge graph structures to detect, evaluate, and benchmark these hallucinations in language models [5], [6], [7], [8], [9].
Facts (12)
Sources
A Knowledge-Graph Based LLM Hallucination Evaluation Framework themoonlight.io 2 facts
procedure: The GraphEval framework detects hallucinations by using a pretrained Natural Language Inference (NLI) model to compare each triple in the constructed Knowledge Graph against the original context, flagging a triple as a hallucination if the NLI model predicts inconsistency with a probability score greater than 0.5.
claim: The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple within the constructed Knowledge Graph is flagged as inconsistent with the provided context.
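The two GraphEval facts above reduce to a simple decision rule. A minimal sketch of that rule, assuming the triples have already been extracted and scored for inconsistency by a pretrained NLI model (the `triple_scores` input shape and function names are illustrative, not GraphEval's actual API):

```python
def flag_hallucinated_triples(triple_scores, threshold=0.5):
    """Flag each knowledge-graph triple whose NLI inconsistency
    probability against the source context exceeds the threshold."""
    return [(triple, score > threshold) for triple, score in triple_scores]

def output_contains_hallucination(triple_scores, threshold=0.5):
    """Mark the whole LLM output as hallucinated if at least one
    of its triples is flagged as inconsistent with the context."""
    return any(flagged for _, flagged in
               flag_hallucinated_triples(triple_scores, threshold))
```

Because a single flagged triple condemns the entire output, the rule also localizes the error: the flagged triples point to where in the response the hallucination occurs.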
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... arxiv.org 2 facts
reference: KG-fpq is a framework for evaluating factuality hallucination in large language models using knowledge graph-based false premise questions.
claim: The authors of 'A Knowledge Graph-Based Hallucination Benchmark for Evaluating...' aggregate entity similarity with a bias toward semantic meaning to better capture the conceptual relationship between the LLM response and the entity description.
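The source does not specify how the biased aggregation is computed; one common realization is a weighted average that gives the semantic-similarity term more than half the weight. The weight value and function name below are assumptions, not the benchmark's actual formula:

```python
def aggregate_entity_similarity(semantic_sim, lexical_sim,
                                semantic_weight=0.7):
    """Combine a semantic and a lexical similarity score into one
    aggregate, biased toward semantic meaning (semantic_weight > 0.5)."""
    assert 0.5 < semantic_weight <= 1.0
    return semantic_weight * semantic_sim + (1 - semantic_weight) * lexical_sim
```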
Enhancing LLMs with Knowledge Graphs: A Case Study - LinkedIn linkedin.com 1 fact
claim: The authors use a knowledge graph as a structured data source for LLM fact-checking to mitigate the risk of hallucination, which is defined as an LLM's tendency to generate erroneous or nonsensical text.
A Knowledge-Graph Based LLM Hallucination Evaluation Framework arxiv.org 1 fact
claim: GraphEval identifies specific triples within a Knowledge Graph that are prone to hallucinations, providing insight into the location of hallucinations within an LLM response.
KGHaluBench: A Knowledge Graph-Based Hallucination ... researchgate.net 1 fact
claim: KGHaluBench is a Knowledge Graph-based hallucination benchmark designed to evaluate Large Language Models.
A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com 1 fact
claim: Fine-tuning an LLM on embedded graph data aligns the model's general language understanding with the structured knowledge from the KG, which improves contextual features, increases reasoning capabilities, and reduces hallucinations.
LLM Knowledge Graph: Merging AI with Structured Data - PuppyGraph puppygraph.com 1 fact
claim: LLM knowledge graphs mitigate hallucinations by grounding responses in a verifiable knowledge graph, which enhances the trustworthiness of the output.
A knowledge-graph based LLM hallucination evaluation framework amazon.science 1 fact
claim: The GraphEval framework identifies hallucinations in Large Language Models by utilizing Knowledge Graph structures to represent information.
KG-RAG: Bridging the Gap Between Knowledge and Creativity - arXiv arxiv.org 1 fact
claim: Transitioning from unstructured dense text representations to dynamic, structured knowledge representation via knowledge graphs can significantly reduce the occurrence of hallucinations in Language Model Agents by ensuring they rely on explicit information rather than implicit knowledge stored in model weights.
Practices, opportunities and challenges in the fusion of knowledge ... frontiersin.org 1 fact
claim: Users of collaborative Knowledge Graph and Large Language Model systems often require transparency regarding whether facts were retrieved from the Knowledge Graph or hallucinated by the Large Language Model, and expect systems to adapt reasoning based on evolving dialogue context.