claim
High uncertainty in a large language model's outputs, indicated by low sequence probabilities or high semantic entropy, suggests the model is generating content without strong grounding in its training data, as noted by Asgari et al. (2024) and Vishwanath et al. (2024).
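The two signals named in the claim can be sketched concretely. A minimal illustration, assuming per-token log-probabilities are available from the model and that sampled generations have already been grouped into semantic-equivalence clusters (the entailment-based clustering step itself is not implemented here; cluster labels are hypothetical inputs):

```python
import math
from collections import defaultdict

def sequence_probability(token_logprobs):
    """Probability of one sampled sequence from its per-token log-probs."""
    return math.exp(sum(token_logprobs))

def semantic_entropy(samples):
    """Entropy over meaning clusters of sampled generations.

    `samples` is a list of (cluster_id, probability) pairs, where
    cluster_id groups generations judged semantically equivalent
    (e.g. by a bidirectional-entailment check, assumed done upstream).
    """
    cluster_mass = defaultdict(float)
    for cluster_id, p in samples:
        cluster_mass[cluster_id] += p
    total = sum(cluster_mass.values())
    return -sum((m / total) * math.log(m / total)
                for m in cluster_mass.values())

# Five sampled answers that mostly agree in meaning -> low entropy.
confident = [("paris", 0.4), ("paris", 0.3), ("paris", 0.15),
             ("paris", 0.1), ("lyon", 0.05)]
# Five answers with five distinct meanings -> high entropy (= ln 5).
uncertain = [(i, 0.2) for i in range(5)]

print(semantic_entropy(confident) < semantic_entropy(uncertain))  # True
```

High semantic entropy here means the probability mass is spread across many distinct meanings, which is the uncertainty signal the claim associates with weakly grounded generations.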
Authors
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org)
Referenced by nodes (2)
- training data concept
- semantic entropy concept