Relations (1)

related 13.00 — strongly supported by 5 facts

LLM-as-a-judge is a primary evaluation paradigm for assessing the performance and factual accuracy of hallucination detection methods {fact:1, fact:13}. Research indicates that established hallucination detection techniques often show significant performance drops (up to 45.9%) when evaluated with LLM-as-a-judge rather than traditional metrics such as ROUGE {fact:2, fact:10}, and some production systems, such as Datadog's, explicitly build their hallucination detection procedure around an LLM-as-a-judge component {fact:6, fact:7}.
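As a rough illustration of how a detector is scored against LLM-as-a-judge labels (a minimal sketch of label agreement only, not any paper's actual protocol; all names and data are illustrative):

```python
def agreement_rate(detector_labels, judge_labels):
    """Fraction of examples on which a hallucination detector's binary
    labels match the LLM-as-a-judge annotations (True = hallucinated)."""
    assert len(detector_labels) == len(judge_labels)
    matches = sum(d == j for d, j in zip(detector_labels, judge_labels))
    return matches / len(detector_labels)

# Hypothetical labels for five generations:
detector = [True, False, True, True, False]
judge = [True, False, False, True, True]
print(agreement_rate(detector, judge))  # 0.6
```

A detector that scores well under ROUGE-style overlap metrics can still show low agreement with judge labels, which is the gap the re-evaluation work measures.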

Facts (5)

Sources
Re-evaluating Hallucination Detection in LLMs (arxiv.org) · arXiv · 2 facts
claim: LLM-as-Judge evaluation, when validated against human judgments, reveals significant performance drops across all hallucination detection methods when they are assessed based on factual accuracy.
procedure: The authors examined the agreement between various evaluation metrics and LLM-as-Judge annotations to evaluate and compare automatic labeling strategies for hallucination detection.
The Illusion of Progress: Re-evaluating Hallucination Detection in ... (arxiv.org) · arXiv · 1 fact
measurement: Several established hallucination detection methods show performance drops of up to 45.9% when assessed using human-aligned metrics like LLM-as-Judge compared to traditional metrics.
Detecting hallucinations with LLM-as-a-judge: Prompt ... (datadoghq.com) · Aritra Biswas, Noé Vernier, Datadog · 1 fact
procedure: The Datadog hallucination detection rubric requires the LLM-as-a-judge to provide a quote from both the context and the answer for each claim, to ensure the generation remains grounded in the provided text.
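The quote requirement lends itself to a deterministic verification step: if the judge must cite verbatim quotes, a plain substring check can confirm those quotes really occur in the context and the answer, guarding against the judge itself fabricating evidence. A minimal sketch of that idea (not Datadog's actual implementation; function and data names are illustrative):

```python
def quotes_grounded(claim_quotes, context, answer):
    """Verify that, for each claim, the judge-supplied context quote
    appears verbatim in the context and the answer quote appears
    verbatim in the answer. Returns False on the first ungrounded quote."""
    for ctx_quote, ans_quote in claim_quotes:
        if ctx_quote not in context or ans_quote not in answer:
            return False
    return True

# Hypothetical judge output for one claim:
context = "The Eiffel Tower is in Paris."
answer = "The tower is located in Paris."
print(quotes_grounded([("is in Paris", "located in Paris")], context, answer))  # True
print(quotes_grounded([("in London", "located in Paris")], context, answer))   # False
```

Exact substring matching is deliberately strict; a real pipeline would likely normalize whitespace and casing before comparing.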
Detect hallucinations in your RAG LLM applications with Datadog ... (datadoghq.com) · Barry Eom, Aritra Biswas, Datadog · 1 fact
procedure: Datadog's hallucination detection feature utilizes an LLM-as-a-judge approach combined with prompt engineering, multi-stage reasoning, and non-AI-based deterministic checks.