claim
Current metrics for evaluating LLM responses and detecting hallucinations suffer from three limitations: their decisions are not explainable, they cannot systematically verify every piece of information in a response, and they incur high computational costs.
Authors
Sources
- A Knowledge-Graph Based LLM Hallucination Evaluation Framework (arxiv.org)
Referenced by nodes (1)
- hallucination concept