Claim
Prompt-induced hallucinations in large language models often arise from ambiguous phrasing or missing context, which leads the model to rely on probabilistic associations rather than grounded knowledge.
Authors
Sources
- Survey and analysis of hallucinations in large language models (frontiersin.org, via Serper)
Referenced by nodes (2)
- Large Language Models concept
- prompt-induced hallucination concept
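The claim above can be illustrated with a minimal sketch of prompt construction: an ambiguous question gives the model nothing to ground on, while injecting retrieved context (and an instruction to use only that context) constrains its answer. This is an illustrative assumption, not an excerpt from the cited survey; `build_prompt` and the example strings are hypothetical.

```python
from typing import Optional


def build_prompt(question: str, context: Optional[str] = None) -> str:
    """Compose a prompt; without context the model must free-associate."""
    if context:
        # Grounded formulation: the evidence travels with the question,
        # and the instruction discourages unsupported completions.
        return (
            "Answer using ONLY the context below. "
            "If the context is insufficient, say so.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
    # Ambiguous formulation: no grounding, higher hallucination risk.
    return question


ambiguous = build_prompt("What did the study find?")
grounded = build_prompt(
    "What did the study find?",
    context="The survey reports that many sampled answers were unsupported.",
)
```

The contrast is deliberate: `ambiguous` leaves the referent of "the study" entirely to the model's learned associations, whereas `grounded` pins the answer to supplied text.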