claim
Hallucinations in Large Language Models are categorized into two main types: factuality hallucinations, in which generated content conflicts with verifiable real-world facts, and faithfulness hallucinations, in which generated content diverges from user instructions, the provided context, or the model's own prior output (self-consistency).
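The distinction in the claim can be illustrated with a minimal sketch; every class name and example below is hypothetical and not drawn from the cited sources:

```python
from dataclasses import dataclass
from enum import Enum

class HallucinationType(Enum):
    # Output contradicts verifiable real-world facts.
    FACTUALITY = "factuality"
    # Output diverges from instructions, provided context, or self-consistency.
    FAITHFULNESS = "faithfulness"

@dataclass
class LabeledExample:
    source_context: str
    output: str
    kind: HallucinationType

# Hypothetical illustrations of each category (invented for this sketch):
examples = [
    LabeledExample(
        source_context="Q: Who wrote 'Pride and Prejudice'?",
        # Contradicts world knowledge -> factuality hallucination.
        output="Charles Dickens wrote 'Pride and Prejudice'.",
        kind=HallucinationType.FACTUALITY,
    ),
    LabeledExample(
        source_context="Summarize: The meeting was postponed to Friday.",
        # Contradicts the provided context -> faithfulness hallucination.
        output="The meeting was cancelled.",
        kind=HallucinationType.FAITHFULNESS,
    ),
]

for ex in examples:
    print(ex.kind.value)
```

Note that the same output can fall into either category depending on the reference: a factually correct statement can still be a faithfulness hallucination if it contradicts the context the model was given.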
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (github.com, via serper)
- The Hallucinations Leaderboard, an Open Effort to Measure ... (huggingface.co, via serper)
Referenced by nodes (1)
- Large Language Models concept