Claim
The Trustworthy Language Model (TLM) consistently catches hallucinations with greater precision and recall than other LLM-based methods across four RAG benchmarks.
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG, Cleanlab (cleanlab.ai, via serper)
Referenced by nodes (2)
- RAG concept
- Trustworthy Language Model concept