claim
Trustworthy Language Model (TLM) is an uncertainty-estimation technique that wraps any LLM to score the trustworthiness of its responses.
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG, Cleanlab (cleanlab.ai), via Serper
Referenced by nodes (1)
- Trustworthy Language Model concept
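
A minimal sketch of the wrapping idea in the claim, assuming only a generic `llm` callable (prompt in, response out). The function name `trustworthiness_score` and the agreement-based scoring are illustrative, not Cleanlab's actual implementation, which reportedly aggregates several signals (e.g., self-consistency across sampled responses and self-reflection):

```python
"""Hypothetical sketch of a TLM-style wrapper: sample the wrapped LLM
several times and use agreement among samples as a crude trust proxy."""
from collections import Counter
from typing import Callable

def trustworthiness_score(
    llm: Callable[[str], str],  # any LLM exposed as prompt -> response
    prompt: str,
    n_samples: int = 5,
) -> tuple[str, float]:
    """Return the modal response and the fraction of samples that agree
    with it, interpreted here as a trustworthiness score in [0, 1]."""
    samples = [llm(prompt) for _ in range(n_samples)]
    # Exact-string matching keeps the sketch simple; a real system would
    # compare responses by semantic similarity rather than equality.
    answer, votes = Counter(samples).most_common(1)[0]
    return answer, votes / n_samples
```

Usage follows the wrapper pattern in the claim: pass in any LLM client's completion function, and a low score flags responses whose resamples disagree, i.e., likely-unreliable outputs.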