claim
RAG systems may produce incorrect responses when the retrieved context lacks the necessary information, whether because of suboptimal search, poor document chunking or formatting, or because the information is simply absent from the knowledge database; in that case the LLM tends to hallucinate an answer from its training data rather than the retrieved context.
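To make this failure mode concrete, below is a minimal sketch (not the pipeline from the cited source): a toy retriever scores chunks against the query, and a hypothetical sufficiency threshold decides whether the LLM is called at all, surfacing the retrieval gap instead of letting the model fall back on parametric memory. All names here (`tiny_embed`, `call_llm`, `SIM_THRESHOLD`) are illustrative assumptions, and the bag-of-words similarity stands in for a real embedding model.

```python
# Minimal sketch of the claimed failure mode and one guard against it.
# Assumptions: tiny_embed, call_llm, and SIM_THRESHOLD are hypothetical
# stand-ins, not any real library's API.

import math
import re
from collections import Counter


def tiny_embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real encoder."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[tuple[float, str]]:
    """Return the top-k chunks by similarity to the query."""
    q = tiny_embed(query)
    scored = sorted(((cosine(q, tiny_embed(c)), c) for c in chunks), reverse=True)
    return scored[:k]


SIM_THRESHOLD = 0.25  # hypothetical cutoff for "context is relevant enough"


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; echoes metadata instead of generating.
    return f"[LLM response to prompt of {len(prompt)} chars]"


def answer(query: str, chunks: list[str]) -> str:
    hits = retrieve(query, chunks)
    if not hits or hits[0][0] < SIM_THRESHOLD:
        # Guard against the failure mode in the claim: if retrieval is weak
        # (bad search, bad chunking, or the fact is not in the KB), refuse
        # rather than let the model answer from its training data.
        return "No sufficiently relevant context found; refusing to answer."
    context = "\n".join(c for _, c in hits)
    return call_llm(f"Answer using ONLY this context:\n{context}\n\nQ: {query}")


if __name__ == "__main__":
    kb = [
        "The capital of France is Paris.",
        "RAG retrieves documents before generation.",
    ]
    print(answer("What is the capital of France?", kb))  # grounded answer path
    print(answer("Who won the 2030 World Cup?", kb))     # insufficient-context path
```

Without the threshold check, the second query would still reach the LLM with irrelevant context, which is exactly the situation in which hallucination detection methods like those benchmarked in the cited source become necessary.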
Authors
Sources
- Benchmarking Hallucination Detection Methods in RAG, Cleanlab (cleanlab.ai)
Referenced by nodes (1)
- RAG systems concept