claim
Hallucinations in Large Language Models fall into two primary categories by source: prompting-induced hallucinations, caused by ill-structured or misleading prompts, and model-internal hallucinations, arising from the model's architecture, pretraining data distribution, or inference behavior.
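A minimal sketch of how this two-way taxonomy could be encoded for annotating observed hallucinations; the type names, fields, and example cases are illustrative assumptions, not drawn from the cited survey:

```python
from dataclasses import dataclass
from enum import Enum, auto


class HallucinationSource(Enum):
    """The two primary hallucination sources named in the claim above."""
    PROMPTING_INDUCED = auto()  # ill-structured or misleading prompts
    MODEL_INTERNAL = auto()     # architecture, pretraining data distribution, inference behavior


@dataclass
class HallucinationRecord:
    """Hypothetical record for tagging one observed hallucination."""
    output_text: str
    source: HallucinationSource
    note: str = ""


# Usage: one illustrative case per category.
cases = [
    HallucinationRecord(
        output_text="Paris is the capital of Italy.",
        source=HallucinationSource.PROMPTING_INDUCED,
        note="Prompt presupposed a false premise.",
    ),
    HallucinationRecord(
        output_text="The 2031 Nobel laureate in physics was ...",
        source=HallucinationSource.MODEL_INTERNAL,
        note="Fabricated fact not attributable to the prompt.",
    ),
]

for case in cases:
    print(case.source.name, "->", case.output_text)
```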
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept