Claim
Intrinsic hallucinations in large language models occur when the model generates output that directly contradicts the provided source input, such as a summary that states facts contradicted by the source text.
Sources
- Survey and analysis of hallucinations in large language models (frontiersin.org)
Referenced by nodes (1)
- Large Language Models (concept)