Claim
Cleanlab's study benchmarks hallucination-detection algorithms that determine when an LLM response generated from retrieved context should not be trusted.
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG, Cleanlab (cleanlab.ai), via serper
Referenced by nodes (2)
- hallucination detection concept
- Cleanlab entity
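To make the claim above concrete, here is a minimal, hypothetical sketch of one very simple trust signal for a RAG response: lexical overlap between the response and the retrieved context. This is an illustration only, not Cleanlab's benchmarked method; the function names, threshold, and scoring heuristic are all assumptions chosen for clarity.

```python
# Hypothetical sketch: flag a RAG response as untrustworthy when few of its
# tokens appear in the retrieved context. Real detectors (such as those
# benchmarked in the Cleanlab study) use far more sophisticated signals.

def groundedness_score(response: str, context: str) -> float:
    """Fraction of response tokens that also appear in the retrieved context."""
    resp_tokens = response.lower().split()
    ctx_tokens = set(context.lower().split())
    if not resp_tokens:
        return 0.0
    return sum(t in ctx_tokens for t in resp_tokens) / len(resp_tokens)

def should_trust(response: str, context: str, threshold: float = 0.5) -> bool:
    """Illustrative decision rule: trust the response only if it is
    sufficiently grounded in the context (threshold is arbitrary)."""
    return groundedness_score(response, context) >= threshold

context = "The Eiffel Tower is 330 metres tall and located in Paris."
print(should_trust("The Eiffel Tower is 330 metres tall.", context))  # grounded
print(should_trust("It was painted gold in 1999 by NASA.", context))  # ungrounded
```

In practice, detectors of this kind trade off precision and recall via the threshold; benchmarking studies like Cleanlab's compare such scoring functions on how well they separate trustworthy from untrustworthy responses.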