reference
The Hughes Hallucination Evaluation Model (HHEM) is a Transformer-based model trained by Vectara to distinguish hallucinated from factually consistent responses produced by various Large Language Models, scoring each response against the context it was generated from.
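A minimal sketch of how such a classifier is typically invoked, assuming the open Hugging Face checkpoint vectara/hallucination_evaluation_model and the predict() helper its model card bundles via trust_remote_code (both are assumptions here, not confirmed by this entry):

```python
# Sketch: score (context, response) pairs with HHEM.
# Assumes the "vectara/hallucination_evaluation_model" checkpoint and its
# bundled predict() helper; verify against the current model card before use.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Each pair is (context the LLM was given, response the LLM produced).
pairs = [
    ("The capital of France is Paris.", "Paris is the capital of France."),
    ("The capital of France is Paris.", "The capital of France is Berlin."),
]

# predict() is expected to return a factual-consistency score in [0, 1] per
# pair; scores near 0 suggest the response hallucinates relative to the context.
scores = model.predict(pairs)
for (context, response), score in zip(pairs, scores):
    print(f"{score:.3f}  {response}")
```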
Authors
Sources
- Real-Time Evaluation Models for RAG: Who Detects Hallucinations ... (cleanlab.ai)
Referenced by nodes (3)
- Large Language Models concept
- Vectara entity
- Transformer models concept