Relations (1)
related (2.32), strongly supporting 4 facts
Hallucination is defined, and instances are detected, by comparing LLM-generated triples against a provided context [1], [2], [3]. The context then serves as the reference for rectifying identified hallucinations through an iterative correction process [4].
Facts (4)
Sources
A Knowledge-Graph Based LLM Hallucination Evaluation Framework (themoonlight.io), 4 facts
Procedure: The GraphEval framework detects hallucinations by using a pretrained Natural Language Inference (NLI) model to compare each triple in the constructed Knowledge Graph against the original context, flagging a triple as a hallucination if the NLI model predicts inconsistency with a probability greater than 0.5.
Procedure: The GraphCorrect strategy rectifies hallucinations by identifying inconsistent triples, sending each problematic triple together with the context back to an LLM to generate a corrected version, and substituting the new triple into the original output, so that correction stays localized and unaffected sections are left unchanged.
Claim: The authors of the GraphEval framework focus on detecting hallucinations within a defined context rather than on identifying discrepancies between LLM responses and the broader training data.
Claim: The GraphEval framework categorizes an entire LLM output as containing a hallucination if at least one triple within the constructed Knowledge Graph is flagged as inconsistent with the provided context.
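The facts above describe a detect-then-correct loop over Knowledge Graph triples. A minimal Python sketch of that loop follows; it is an illustration under stated assumptions, not the authors' implementation. The functions `nli_inconsistency` (the pretrained NLI model's inconsistency probability) and `llm_correct` (the LLM call that rewrites a flagged triple) are hypothetical stand-ins supplied by the caller.

```python
from typing import Callable, List, Tuple

# A triple is (subject, relation, object), as extracted into the Knowledge Graph.
Triple = Tuple[str, str, str]


def detect_hallucinations(
    triples: List[Triple],
    context: str,
    nli_inconsistency: Callable[[str, str], float],  # hypothetical NLI wrapper
    threshold: float = 0.5,  # flag when inconsistency probability exceeds 0.5
) -> List[Triple]:
    """GraphEval-style detection: flag each triple the NLI model finds
    inconsistent with the context with probability above the threshold."""
    flagged = []
    for subj, rel, obj in triples:
        claim = f"{subj} {rel} {obj}"
        if nli_inconsistency(context, claim) > threshold:
            flagged.append((subj, rel, obj))
    return flagged


def output_is_hallucinated(
    triples: List[Triple],
    context: str,
    nli_inconsistency: Callable[[str, str], float],
) -> bool:
    """The whole output counts as hallucinated if at least one triple is flagged."""
    return len(detect_hallucinations(triples, context, nli_inconsistency)) > 0


def graph_correct(
    output: str,
    flagged: List[Triple],
    context: str,
    llm_correct: Callable[[Triple, str], Triple],  # hypothetical LLM call
) -> str:
    """GraphCorrect-style repair: replace only the text of flagged triples,
    leaving unaffected sections of the output untouched."""
    for triple in flagged:
        corrected = llm_correct(triple, context)
        output = output.replace(" ".join(triple), " ".join(corrected))
    return output
```

A toy usage, with a trivial stand-in for the NLI model: given the context "Paris is the capital of France." and the triples `("Paris", "is the capital of", "France")` and `("Paris", "is the capital of", "Germany")`, detection flags only the second triple, and `graph_correct` rewrites just that span of the output.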