Reference
Kossen et al. (2024) introduced Semantic Entropy Probes, a method for robust and cheap hallucination detection in Large Language Models.
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv)
- Awesome-Hallucination-Detection-and-Mitigation (GitHub)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination detection (concept)