Claim
Unsupervised hallucination detection offers a scalable way to evaluate large language models, avoiding the costly annotation and limited generalization that constrain supervised approaches.
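One common unsupervised strategy, given as a minimal sketch here (not necessarily the method in the cited paper), is self-consistency: sample several answers to the same prompt and flag low agreement as a hallucination signal. The `samples` lists and the 0.5 threshold below are illustrative assumptions.

```python
from collections import Counter

def consistency_score(samples):
    # Fraction of samples matching the most common (normalized) answer.
    if not samples:
        return 0.0
    normalized = [s.strip().lower() for s in samples]
    top_count = Counter(normalized).most_common(1)[0][1]
    return top_count / len(normalized)

def flag_hallucination(samples, threshold=0.5):
    # Low agreement across samples suggests the model is confabulating.
    return consistency_score(samples) < threshold

# Hypothetical sampled answers to "What is the capital of France?"
consistent = ["Paris", "paris", "Paris", "Paris"]
inconsistent = ["Paris", "Lyon", "Marseille", "Nice"]
print(flag_hallucination(consistent))    # False: answers agree
print(flag_hallucination(inconsistent))  # True: answers diverge
```

No labels are needed: the signal comes entirely from the model's own output distribution, which is what makes the approach scalable.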
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (1)
- Large Language Models concept