procedure
LLM-based hallucination detection uses a large language model as a judge to classify the responses of a retrieval-augmented generation (RAG) system into categories such as facts (claims supported by the retrieved context) and context-conflicting hallucinations (claims that contradict the retrieved context).
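A minimal sketch of this procedure, assuming the OpenAI chat completions API as the judge; the model name, label set, and prompt wording are illustrative assumptions, not part of the source.

```python
# Sketch of LLM-as-judge hallucination detection for a RAG response.
# Assumptions (not from the source): OpenAI as the judge backend, the
# two-label scheme {fact, context-conflicting hallucination}, and the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are given a retrieved context and a generated answer.
Classify the answer as exactly one of:
- fact: every claim is supported by the context
- context-conflicting hallucination: at least one claim contradicts the context

Context:
{context}

Answer:
{answer}

Reply with only the label."""


def classify_response(context: str, answer: str) -> str:
    """Ask the judge LLM for a single label describing the answer."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of judge model
        temperature=0,        # deterministic output for classification
        messages=[
            {"role": "user",
             "content": JUDGE_PROMPT.format(context=context, answer=answer)},
        ],
    )
    return completion.choices[0].message.content.strip()


if __name__ == "__main__":
    ctx = "The Eiffel Tower is 330 metres tall and located in Paris."
    ans = "The Eiffel Tower is 450 metres tall."
    print(classify_response(ctx, ans))  # expected: context-conflicting hallucination
```

In practice the label set can be extended (e.g. fact-conflicting hallucinations, unverifiable claims) by listing the extra labels and their definitions in the same prompt.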
