Claim
The Trustworthy Language Model (TLM) scores the trustworthiness of responses from any LLM; wrapped around an existing LLM, it yields uncertainty estimates for that model's outputs.
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG - Cleanlab (cleanlab.ai)
Referenced by nodes (1)
- Trustworthy Language Model concept