claim
Hallucinations in Large Language Models fall into two main types: factuality hallucinations, where generated content contradicts verifiable real-world facts, and faithfulness hallucinations, where generated content diverges from user instructions, the provided context, or its own internal consistency. For example, stating an incorrect birth date for a historical figure is a factuality hallucination, whereas a summary that introduces details absent from the source document is a faithfulness hallucination.
