reference
Lin et al. (2023) proposed a method for uncertainty quantification in black-box Large Language Models in the paper 'Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models'.
Authors
- Zhen Lin, Shubhendu Trivedi, Jimeng Sun
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv)
Referenced by nodes (2)
- Large Language Models concept
- Uncertainty quantification concept