Claim
The EdinburghNLP awesome-hallucination-detection repository provides a taxonomy of error types for AI systems, including comprehension, factuality, specificity, and inference.
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub, github.com)
Referenced by nodes (2)
- inference concept
- factuality concept