Procedure
Prompt-based strategies encourage Large Language Models (LLMs) to self-assess their confidence, while post-hoc calibration techniques, such as temperature scaling or external calibrators, adjust logits or embedding representations after training (Whitehead et al., 2022; Xie et al., 2024; Tian et al., 2023).
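Temperature scaling, one of the post-hoc techniques named above, can be illustrated with a minimal sketch: logits are divided by a single scalar temperature T before the softmax, and T is fit on a held-out validation set by minimizing negative log-likelihood. The function names and the grid-search fitting procedure here are illustrative assumptions, not a reference implementation from the cited works.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Scale logits by 1/T before normalizing: T > 1 softens
    # (less confident) probabilities, T < 1 sharpens them.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    # Pick the temperature minimizing negative log-likelihood on a
    # held-out validation set (simple grid search for clarity;
    # gradient-based optimization of T is also common).
    def nll(t):
        probs = softmax(val_logits, t)
        correct = probs[np.arange(len(val_labels)), val_labels]
        return -np.log(correct + 1e-12).mean()
    return min(grid, key=nll)
```

Because T only rescales all logits uniformly, the model's predicted class never changes; only the confidence attached to it does, which is what makes this a pure calibration adjustment.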
Sources
- Medical Hallucination in Foundation Models and Their ... (www.medrxiv.org)