claim
Intrinsic hallucinations in large language models occur when the model's output directly contradicts the provided input, for example a summary that misstates facts given in the source document it is summarizing.
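
A minimal sketch of how such contradictions can be checked automatically, assuming an off-the-shelf natural-language-inference (NLI) model (here `roberta-large-mnli` via the Hugging Face `transformers` pipeline; the model choice and example texts are illustrative assumptions, not part of the claim): the source text is used as the premise and each generated sentence as the hypothesis, and a CONTRADICTION label flags a candidate intrinsic hallucination.

```python
# Sketch: flag intrinsic hallucinations by checking each generated sentence
# against the source text with an NLI model. Model and example texts are
# illustrative assumptions, not taken from the claim above.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

source = (
    "The report was published in March 2021 and covers fiscal year 2020, "
    "in which revenue grew 12 percent year over year."
)
summary_sentences = [
    "Revenue grew 12 percent in fiscal year 2020.",       # supported by the source
    "The report states that revenue declined in 2020.",   # contradicts the source
]

for sentence in summary_sentences:
    # Premise = source document, hypothesis = generated sentence.
    out = nli({"text": source, "text_pair": sentence})
    result = out[0] if isinstance(out, list) else out      # normalize return shape
    flagged = result["label"] == "CONTRADICTION"
    print(f"{'HALLUCINATION' if flagged else 'ok':13s} "
          f"{result['label']:13s} {result['score']:.2f}  {sentence}")
```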
