claim
KGHaluBench is a benchmark for evaluating the truthfulness of Large Language Models; it decomposes the overall hallucination rate into specific components in order to identify the level of knowledge responsible for each hallucination.
Authors
Sources
- A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... arxiv.org via serper
Referenced by nodes (2)
- Large Language Models concept
- KGHaluBench concept