Procedure
LLM-based hallucination detection uses a large language model as a judge: given a RAG system's retrieved context and generated answer, the judge classifies the response into categories such as context-conflicting hallucination or supported fact.
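A minimal sketch of this judge pattern, assuming an OpenAI-compatible chat API; the prompt wording, label set, and model name ("gpt-4o-mini") are illustrative choices, not the exact method described in the AWS source:

```python
# Sketch: LLM-as-judge hallucination classification for a single RAG response.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

LABELS = {"FACT", "CONTEXT_CONFLICTING_HALLUCINATION"}

JUDGE_PROMPT = """You are a hallucination detector for a RAG system.
Given the retrieved context and the generated answer, reply with exactly one label:
- FACT: every claim in the answer is supported by the context.
- CONTEXT_CONFLICTING_HALLUCINATION: the answer contradicts or is unsupported by the context.

Context:
{context}

Answer:
{answer}

Label:"""

def classify_response(context: str, answer: str) -> str:
    """Ask the judge model to label one RAG answer against its retrieved context."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical judge model
        temperature=0,        # deterministic labeling
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(context=context, answer=answer),
        }],
    )
    label = completion.choices[0].message.content.strip()
    return label if label in LABELS else "UNPARSEABLE"

if __name__ == "__main__":
    ctx = "The Eiffel Tower is 330 metres tall and located in Paris."
    ans = "The Eiffel Tower is 500 metres tall."
    print(classify_response(ctx, ans))  # expected: CONTEXT_CONFLICTING_HALLUCINATION
```

Constraining the judge to a fixed label set and temperature 0 keeps the output machine-parseable; responses outside the label set are flagged rather than silently accepted.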
Sources
- Detect hallucinations for RAG-based systems - AWS (aws.amazon.com)
Referenced by nodes (1)
- RAG concept