Relations (1)

related 0.50 — strongly supporting 5 facts

KGHaluBench is a specialized benchmark explicitly designed to evaluate the truthfulness and hallucination rates of Large Language Models [1], [2], [3]. The framework assesses these models by utilizing knowledge graphs to generate challenging questions [4] and has been used to measure the performance of 25 frontier Large Language Models [5].
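The evaluation loop implied above (derive factual questions from knowledge-graph triples, then score model answers against the graph) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Triple` class, the question template, and the abstention handling are all assumptions for the example.

```python
# Minimal sketch of a KG-based hallucination check. All names here
# (Triple, to_question, hallucination_rate) are illustrative, not
# from the KGHaluBench codebase.

from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str


def to_question(t: Triple) -> str:
    # Naive template; KGHaluBench's actual questions are multifaceted
    # and difficulty-calibrated, which this sketch does not attempt.
    return f"What is the {t.relation} of {t.subject}?"


def hallucination_rate(answers, triples):
    """Fraction of attempted answers that contradict the graph.

    Answers of None are treated as abstentions and excluded, so an
    abstaining model is not penalized as hallucinating.
    """
    wrong = attempted = 0
    for ans, t in zip(answers, triples):
        if ans is None:  # model declined to answer
            continue
        attempted += 1
        if ans.strip().lower() != t.obj.lower():
            wrong += 1
    return wrong / attempted if attempted else 0.0


triples = [
    Triple("Paris", "country", "France"),
    Triple("Mount Everest", "continent", "Asia"),
]
answers = ["France", "Europe"]  # second answer contradicts the graph
print(hallucination_rate(answers, triples))  # → 0.5
```

A real benchmark would additionally separate error sources (as the arXiv abstract notes, KGHaluBench decomposes the overall hallucination rate into components), which a single aggregate rate like this cannot do.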

Facts (5)

Sources
KGHaluBench: A Knowledge Graph-Based Hallucination ... · researchgate.net (ResearchGate) · 2 facts
claim: KGHaluBench assesses Large Language Models across the breadth and depth of their knowledge.
claim: KGHaluBench is a Knowledge Graph-based hallucination benchmark designed to evaluate Large Language Models.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... · aclanthology.org · Alex Robertson, Huizhi Liang, Mahbub Gani, Rohit Kumar, Srijith Rajamohan · Association for Computational Linguistics · 2 facts
procedure: The KGHaluBench framework utilizes a knowledge graph to dynamically construct challenging, multifaceted questions for LLMs, with question difficulty statistically estimated to address popularity bias.
measurement: The authors of KGHaluBench evaluated 25 frontier Large Language Models using novel accuracy and hallucination metrics.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... · arxiv.org (arXiv) · 1 fact
claim: KGHaluBench is a benchmark designed to evaluate the truthfulness of Large Language Models by decomposing the overall hallucination rate into components that identify which level of knowledge is responsible for each hallucination.