Claim
Cleanlab’s Trustworthy Language Model (TLM) quantifies the trustworthiness of an LLM response using a combination of self-reflection, consistency across sampled responses, and probabilistic measures.
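The consistency component of this idea can be illustrated with a toy sketch: score agreement among several sampled responses and blend it with a self-reflection confidence. This is not Cleanlab's actual TLM implementation; the function names, the equal weighting, and the inputs are illustrative assumptions.

```python
from collections import Counter

def consistency_score(samples: list[str]) -> float:
    # Fraction of sampled responses that agree with the most common
    # answer; 1.0 means all samples are identical (toy proxy for
    # TLM-style consistency, not Cleanlab's real metric).
    counts = Counter(samples)
    return counts.most_common(1)[0][1] / len(samples)

def trustworthiness(samples: list[str], self_reflection: float) -> float:
    # Illustrative combination: equal-weight average of sample
    # consistency and the model's self-reported confidence,
    # both assumed to lie in [0, 1].
    return 0.5 * consistency_score(samples) + 0.5 * self_reflection

# Consistent samples + confident self-reflection -> high score.
high = trustworthiness(["Paris"] * 5, self_reflection=0.9)
# Divergent samples + low self-reflection -> low score.
low = trustworthiness(["Paris", "Lyon", "Nice", "Paris", "Lille"],
                      self_reflection=0.4)
```

In this sketch `high` comes out well above `low`, mirroring the intuition that a response is more trustworthy when resampled answers agree and the model reports confidence in its own output.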
Authors
Sources
- Real-Time Evaluation Models for RAG: Who Detects Hallucinations ... cleanlab.ai via serper
Referenced by nodes (2)
- Cleanlab entity
- Trustworthy Language Model concept