Relations (1)

related 2.81 — strongly supporting 6 facts

GraphEval is a framework specifically designed to identify and categorize hallucinations in Large Language Model outputs by analyzing Knowledge Graph structures [1], [2], [3]. It additionally provides a methodology, GraphCorrect, for rectifying the detected hallucinations [4], and it uses NLI models to verify factual consistency against the source context [5].

Facts (6)

Sources
A Knowledge-Graph Based LLM Hallucination Evaluation Framework — The Moonlight (themoonlight.io), 4 facts
procedure — The GraphEval framework detects hallucinations by using a pretrained Natural Language Inference (NLI) model to compare each triple in the constructed Knowledge Graph against the original context, flagging a triple as a hallucination if the NLI model predicts inconsistency with a probability score greater than 0.5.
claim — The integration of GraphCorrect with GraphEval provides a methodology for rectifying hallucinations in Large Language Model outputs, with potential applications in fields requiring factual correctness such as medical advice or legal documentation.
claim — The authors of the GraphEval framework focus on detecting hallucinations within a defined context rather than identifying discrepancies between LLM responses and broader training data.
claim — The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple within the constructed Knowledge Graph is flagged as inconsistent with the provided context.
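The detection procedure and aggregation rule above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the NLI scorer below is a toy stand-in (in GraphEval it would be a pretrained NLI model returning P(inconsistent | context, triple)), and all function names here are illustrative.

```python
# Sketch of GraphEval-style per-triple hallucination checking:
# flag a triple when the NLI inconsistency probability exceeds 0.5,
# then mark the whole output hallucinated if any triple is flagged.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def triple_to_sentence(triple: Triple) -> str:
    """Serialize a KG triple into a plain-text hypothesis for the NLI model."""
    s, r, o = triple
    return f"{s} {r} {o}."

def flag_hallucinated_triples(
    context: str,
    triples: List[Triple],
    nli_inconsistency_prob: Callable[[str, str], float],
    threshold: float = 0.5,  # threshold reported in the paper
) -> List[Triple]:
    """Return every triple the NLI scorer deems inconsistent with the context."""
    return [
        t for t in triples
        if nli_inconsistency_prob(context, triple_to_sentence(t)) > threshold
    ]

def output_contains_hallucination(flagged: List[Triple]) -> bool:
    """Aggregation rule: any flagged triple marks the entire output."""
    return len(flagged) > 0

# Toy scorer standing in for a real NLI model: calls a hypothesis
# inconsistent whenever its object string never appears in the context.
def toy_scorer(context: str, hypothesis: str) -> float:
    return 0.1 if hypothesis.rstrip(".").split()[-1] in context else 0.9

context = "Paris is the capital of France."
triples = [("Paris", "is capital of", "France"),
           ("Paris", "is capital of", "Germany")]
flagged = flag_hallucinated_triples(context, triples, toy_scorer)
# flagged contains only the France/Germany mismatch, so the output is flagged.
```

A real scorer would run an NLI cross-encoder over (context, serialized triple) pairs; the per-triple flags are what give GraphEval its ability to localize the hallucination within the response.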
A Knowledge-Graph Based LLM Hallucination Evaluation Framework — arXiv (arxiv.org), 1 fact
claim — GraphEval identifies specific triples within a Knowledge Graph that are prone to hallucinations, providing insight into the location of hallucinations within an LLM response.
A knowledge-graph based LLM hallucination evaluation framework — Amazon Science (amazon.science), 1 fact
claim — The GraphEval framework identifies hallucinations in Large Language Models by utilizing Knowledge Graph structures to represent information.