Claim
When validated against human judgments, LLM-as-Judge evaluation reveals significant performance drops across all hallucination detection methods once they are assessed on factual accuracy.
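A minimal sketch of the validation step this claim implies, assuming binary hallucination labels: an LLM judge's verdicts are first checked for agreement with human judgments (here via Cohen's kappa), and only then used to score detection methods on factual accuracy. All labels, detector names, and numbers below are hypothetical placeholders, not data from the cited paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    pa, pb = Counter(labels_a), Counter(labels_b)
    expected = sum((pa[k] / n) * (pb[k] / n) for k in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

def accuracy(preds, refs):
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

# Hypothetical data: 1 = "contains a hallucination", 0 = "factually correct".
human_labels = [1, 0, 1, 1, 0, 0, 1, 0]   # gold human judgments
judge_labels = [1, 0, 1, 0, 0, 0, 1, 1]   # LLM-as-Judge verdicts
method_preds = {                           # hallucination detectors under test
    "selfcheck": [1, 1, 1, 0, 0, 1, 1, 0],
    "nli_based": [1, 0, 0, 1, 0, 0, 1, 0],
}

# Step 1: validate the judge against human judgments before trusting it.
print(f"judge vs. human kappa: {cohen_kappa(judge_labels, human_labels):.2f}")

# Step 2: score each detector against the validated judge's factual-accuracy
# labels; the performance drops in the claim surface at this step.
for name, preds in method_preds.items():
    print(f"{name}: accuracy vs. judge = {accuracy(preds, judge_labels):.2f}")
```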
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (3)
- hallucination detection concept
- LLM-as-a-judge concept
- factual correctness concept