Claim
High uncertainty in a Large Language Model's outputs, indicated by low sequence probabilities or high semantic entropy, suggests the model is generating content without strong grounding in its training data, as noted by Asgari et al. (2024) and Vishwanath et al. (2024).
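The two uncertainty signals named above can be sketched concretely. The snippet below is a minimal illustration, not the method from either cited paper: sequence probability is the product of per-token probabilities (summed in log space), and semantic entropy is the entropy over clusters of semantically equivalent generations. As a simplifying assumption, clustering here uses normalized exact-match; real semantic-entropy pipelines use an entailment model to group paraphrases.

```python
import math
from collections import defaultdict

def sequence_log_prob(token_probs):
    """Log-probability of a generated sequence: sum of per-token log-probs.

    A very negative value (low sequence probability) is one signal of
    model uncertainty.
    """
    return sum(math.log(p) for p in token_probs)

def semantic_entropy(samples):
    """Entropy over clusters of semantically equivalent samples.

    samples: list of (answer_text, probability) pairs drawn from the model.
    Assumption: answers are clustered by lowercased exact match as a cheap
    stand-in for semantic equivalence checking.
    """
    clusters = defaultdict(float)
    for answer, p in samples:
        clusters[answer.strip().lower()] += p
    total = sum(clusters.values())
    return -sum((p / total) * math.log(p / total) for p in clusters.values())
```

When sampled answers agree in meaning, the probability mass falls into one cluster and the entropy is near zero; when the answers disagree, the mass spreads across clusters and the entropy rises, which is the "high semantic entropy" condition the claim associates with weak grounding.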

Authors

Sources

Referenced by nodes (2)