claim
Trustworthy Language Model (TLM) is an uncertainty-estimation technique that wraps any LLM and scores the trustworthiness of each of its responses.
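The wrapping idea can be illustrated with a minimal self-consistency sketch: sample the wrapped LLM several times and use agreement on the modal answer as a trust score. This is a hypothetical illustration of the general technique, not Cleanlab's actual TLM API; the `llm` callable and `trust_score` helper are assumptions introduced here.

```python
from collections import Counter
from typing import Callable, List, Tuple

def trust_score(llm: Callable[[str], str], prompt: str, k: int = 5) -> Tuple[str, float]:
    """Sample k responses from the wrapped LLM; trust = agreement rate on the modal answer."""
    responses: List[str] = [llm(prompt) for _ in range(k)]
    answer, count = Counter(responses).most_common(1)[0]
    return answer, count / k

# Stub stand-in for a real LLM (deterministic, purely for illustration).
def stub_llm(prompt: str) -> str:
    return "Paris" if "capital of France" in prompt else "unsure"

answer, score = trust_score(stub_llm, "What is the capital of France?")
# answer == "Paris", score == 1.0 (the stub always agrees with itself)
```

A real implementation would combine several signals (e.g. sampled-response consistency and the model's own reflection on its answer) rather than agreement alone.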

Authors

Sources

Referenced by nodes (1)