Procedure
Each hallucination detection method in the Cleanlab benchmark takes a user query, retrieved context, and LLM response as input and returns a score between 0 and 1 indicating the likelihood of hallucination.
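This interface can be sketched in Python. The `HallucinationDetector` protocol and the `overlap_baseline` function below are hypothetical illustrations, not the benchmark's actual API; the baseline is a toy lexical-overlap scorer included only to show the (query, context, response) → score-in-[0, 1] contract.

```python
from typing import Protocol


class HallucinationDetector(Protocol):
    """Hypothetical interface matching the benchmark's contract:
    inputs are the user query, retrieved context, and LLM response;
    output is a score in [0, 1], higher = more likely hallucinated."""

    def score(self, query: str, context: str, response: str) -> float:
        ...


def overlap_baseline(query: str, context: str, response: str) -> float:
    """Toy illustration (not a method from the benchmark): the fraction
    of response tokens that do not appear in the retrieved context."""
    ctx_tokens = set(context.lower().split())
    resp_tokens = response.lower().split()
    if not resp_tokens:
        return 0.0
    unsupported = sum(1 for t in resp_tokens if t not in ctx_tokens)
    return unsupported / len(resp_tokens)
```

A response fully supported by the context scores 0.0, while one sharing no tokens with it scores 1.0, keeping the output in the required range.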
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG - Cleanlab (cleanlab.ai)
Referenced by nodes (1)
- Cleanlab entity