claim
Introducing a "not sure" category into large language model (LLM) hallucination detection improves precision by allowing the model to abstain from a judgment when its uncertainty is high: low-confidence predictions, which are disproportionately wrong, are excluded from the counted decisions.
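A minimal sketch of the abstention idea the claim describes. The detector outputs, confidence scores, and the 0.8 threshold below are all hypothetical, chosen only to illustrate why removing low-confidence predictions can raise precision; this is not the method of any specific paper.

```python
from typing import List, Optional, Tuple

def predict(label: int, confidence: float, threshold: float) -> Optional[int]:
    """Return the predicted label, or None ("not sure") when confidence is low."""
    return label if confidence >= threshold else None

# Hypothetical detector outputs: (predicted_label, confidence, true_label).
# 1 = "hallucination", 0 = "faithful". Values are illustrative only.
outputs = [
    (1, 0.95, 1),
    (1, 0.90, 1),
    (1, 0.55, 0),  # low-confidence false positive
    (0, 0.85, 0),
    (1, 0.60, 0),  # low-confidence false positive
    (1, 0.92, 1),
]

def precision(preds: List[Tuple[int, int]]) -> float:
    """Precision over the positive ("hallucination") class."""
    tp = sum(1 for p, t in preds if p == 1 and t == 1)
    fp = sum(1 for p, t in preds if p == 1 and t == 0)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Without abstention: every prediction is counted.
base = precision([(p, t) for p, c, t in outputs])

# With a "not sure" category: abstain below the confidence threshold
# and score only the predictions the detector was willing to make.
kept = [(predict(p, c, 0.8), t) for p, c, t in outputs]
selective = precision([(p, t) for p, t in kept if p is not None])

print(base, selective)
```

In this toy run the two low-confidence false positives abstain, so precision over the remaining decisions rises from 0.6 to 1.0; the trade-off is reduced coverage, since abstained cases receive no decision at all.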

Authors

Sources

Referenced by nodes (1)