claim
The degree of self-consistency across independently sampled Large Language Model outputs can serve as a signal for hallucination detection: higher agreement among samples correlates with higher factual accuracy.
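The claim can be sketched as a simple procedure: sample the model several times on the same prompt, measure how often the samples agree with the majority answer, and flag low agreement as a possible hallucination. A minimal sketch follows; the `sample_answers` stub, the exact-match agreement metric, and the `0.5` threshold are illustrative assumptions, not part of the claim itself.

```python
import random
from collections import Counter

def consistency_score(answers):
    """Fraction of sampled answers that agree with the majority answer."""
    counts = Counter(answers)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(answers)

def flag_hallucination(answers, threshold=0.5):
    """Flag a possible hallucination when majority agreement is low."""
    return consistency_score(answers) < threshold

# Hypothetical stand-in for repeated LLM calls at nonzero temperature.
def sample_answers(prompt, n=5, seed=0):
    rng = random.Random(seed)
    pool = ["Paris"] * 9 + ["Lyon"]  # mostly consistent toy distribution
    return [rng.choice(pool) for _ in range(n)]

answers = sample_answers("What is the capital of France?")
print(consistency_score(answers), flag_hallucination(answers))
```

In practice, exact string match is a crude agreement metric; published self-consistency methods typically use semantic equivalence (e.g., entailment or embedding similarity) between samples instead.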

Authors

Sources

Referenced by nodes (2)