Relations (1)

Relation strength 3.32 · strongly supported by 9 facts

GraphEval is a framework designed to evaluate LLM hallucinations by representing information as Knowledge Graph structures, as established in [1], [2], and [3]. It constructs these Knowledge Graphs from LLM outputs and then identifies and verifies individual triples for accuracy, as detailed in [4], [5], and [6].

Facts (9)

Sources
A Knowledge-Graph Based LLM Hallucination Evaluation Framework · The Moonlight (themoonlight.io) · 4 facts
procedure: The GraphEval framework detects hallucinations by using a pretrained Natural Language Inference (NLI) model to compare each triple in the constructed Knowledge Graph against the original context, flagging a triple as a hallucination if the NLI model predicts inconsistency with a probability greater than 0.5.
claim: GraphEval uses a structured knowledge graph approach to achieve higher hallucination detection accuracy and to pinpoint where inaccuracies occur within Large Language Model outputs.
procedure: The GraphEval framework constructs a Knowledge Graph from LLM output through a four-step pipeline: (1) processing the input text, (2) detecting unique entities, (3) performing coreference resolution to retain only specific references, and (4) extracting relations to form triples of (entity1, relation, entity2).
claim: The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple in the constructed Knowledge Graph is flagged as inconsistent with the provided context.
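The four facts above describe the full detection loop: build a Knowledge Graph of (entity1, relation, entity2) triples from the LLM output, score each triple against the context with an NLI model, and flag the whole output if any triple exceeds the 0.5 inconsistency threshold. A minimal Python sketch of that loop, where the triple extractor and NLI scorer are toy stand-ins (the paper's actual extraction method and pretrained NLI model are not specified here):

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (entity1, relation, entity2)

def extract_triples(llm_output: str) -> List[Triple]:
    """Stand-in for the four-step KG construction: text processing,
    entity detection, coreference resolution, relation extraction.
    Toy rule: treat each three-word sentence as one triple."""
    triples = []
    for sentence in llm_output.split("."):
        words = sentence.strip().split()
        if len(words) == 3:
            triples.append((words[0], words[1], words[2]))
    return triples

def nli_inconsistency_prob(premise: str, hypothesis: str) -> float:
    """Stand-in for a pretrained NLI model's inconsistency probability.
    Toy rule: 1.0 if any hypothesis word is absent from the premise."""
    return 0.0 if all(w in premise for w in hypothesis.split()) else 1.0

def detect_hallucination(context: str, llm_output: str,
                         threshold: float = 0.5) -> dict:
    """Flag each triple whose inconsistency score exceeds the threshold;
    the whole output counts as hallucinated if any triple is flagged."""
    flagged = []
    for (e1, rel, e2) in extract_triples(llm_output):
        if nli_inconsistency_prob(context, f"{e1} {rel} {e2}") > threshold:
            flagged.append((e1, rel, e2))
    return {"hallucination": bool(flagged), "flagged_triples": flagged}
```

Besides the binary verdict, returning the flagged triples preserves GraphEval's ability to explain *where* in the output the hallucination occurs.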
A knowledge-graph based LLM hallucination evaluation framework · Amazon Science (amazon.science) · 2 facts
reference: GraphEval is a hallucination evaluation framework that represents information using Knowledge Graph (KG) structures.
claim: GraphEval identifies hallucinations in Large Language Models by using Knowledge Graph structures to represent information.
A Knowledge-Graph Based LLM Hallucination Evaluation Framework · arXiv (arxiv.org) · 1 fact
claim: GraphEval identifies the specific triples within a Knowledge Graph that are prone to hallucination, providing insight into where hallucinations occur in an LLM response.
A Knowledge-Graph Based LLM Hallucination Evaluation Framework · Sansford, Richardson · Semantic Scholar (semanticscholar.org) · 1 fact
claim: GraphEval is a hallucination evaluation framework for Large Language Models that represents information using Knowledge Graph structures, as presented in the paper 'A Knowledge-Graph Based LLM Hallucination Evaluation Framework' by Sansford and Richardson.
Unknown source · 1 fact
claim: GraphEval is a knowledge-graph based LLM hallucination evaluation framework.