Procedure
The GraphEval and GraphCorrect framework detects hallucinations by extracting knowledge graph triples from LLM output and checking whether each triple is entailed by the provided context; GraphCorrect then repairs the output by prompting an LLM to generate factually correct triples and substituting them for the non-factual information.
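A minimal sketch of the detect-then-correct loop described above. This is not the authors' implementation: in the real framework, triple extraction and entailment are handled by an LLM and an NLI model, whereas here a toy substring check stands in for entailment, and the `fixes` mapping stands in for the LLM-generated corrected triples.

```python
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def is_entailed(triple: Triple, context: str) -> bool:
    """Toy stand-in for an NLI entailment check: a triple counts as
    entailed only if all three elements appear in the context."""
    return all(part.lower() in context.lower() for part in triple)

def detect_hallucinations(triples: List[Triple], context: str) -> List[Triple]:
    """GraphEval-style step: return the triples NOT entailed by the context."""
    return [t for t in triples if not is_entailed(t, context)]

def correct_output(output: str, bad: List[Triple],
                   fixes: Dict[Tuple[str, str], str]) -> str:
    """GraphCorrect-style step: swap each non-factual object for a
    corrected one (the real framework prompts an LLM for this)."""
    for subj, rel, obj in bad:
        if (subj, rel) in fixes:
            output = output.replace(obj, fixes[(subj, rel)])
    return output

context = "Paris is the capital of France."
triples = [("Paris", "capital of", "France"),
           ("Paris", "capital of", "Germany")]

bad = detect_hallucinations(triples, context)
fixed = correct_output("Paris is the capital of Germany.", bad,
                       {("Paris", "capital of"): "France"})
```

Here `bad` contains only the unsupported triple, and `fixed` is the corrected sentence; swapping `is_entailed` for a real NLI model keeps the same overall flow.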
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection - GitHub (github.com)
Referenced by nodes (1)
- GraphEval concept