Claim
Unsupervised methods for detecting hallucinations in large language models estimate uncertainty from three kinds of signal: token-level confidence within a single generation, sequence-level variance across multiple sampled generations, or patterns in the model's hidden states (the first two signals are sketched in code below).
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (2)
- hallucination concept
- Large Language Models concept
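
The first two signal families in the claim can be illustrated concretely. Below is a minimal sketch, assuming per-token log-probabilities are already available from the decoder; the function names, aggregation choices, and thresholds are hypothetical illustrations, not the method of the cited paper:

```python
from statistics import mean, pvariance

def token_confidence(token_logprobs: list[float]) -> float:
    """Token-level signal: mean log-probability of a single generation.
    Lower values suggest a less confident, more hallucination-prone
    output (the mean aggregation is one common, illustrative choice)."""
    return mean(token_logprobs)

def cross_sample_variance(sample_logprobs: list[list[float]]) -> float:
    """Sequence-level signal: variance of length-normalized sequence
    log-probabilities across several samples for the same prompt.
    High variance suggests the model is unstable about its answer."""
    per_sequence = [mean(lp) for lp in sample_logprobs]
    return pvariance(per_sequence)

def flag_hallucination(token_logprobs: list[float],
                       sample_logprobs: list[list[float]],
                       conf_threshold: float = -1.5,
                       var_threshold: float = 0.25) -> bool:
    """Hypothetical decision rule combining both signals; the
    thresholds are placeholders that would need calibration on a
    labeled validation set."""
    return (token_confidence(token_logprobs) < conf_threshold
            or cross_sample_variance(sample_logprobs) > var_threshold)
```

In practice the log-probabilities would come from the generation API of the model being monitored, and the third signal family, hidden-state pattern analysis, typically trains a lightweight probe on intermediate activations rather than operating on output probabilities at all.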