claim
The Trustworthy Language Model (TLM) consistently detects hallucinations with higher precision and recall than other LLM-based methods across four RAG benchmarks.

Authors

Sources

Referenced by nodes (2)